  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
291

Time-varying linear predictive coding of speech signals.

Hall, Mark Gilbert January 1977 (has links)
Thesis. 1977. M.S.--Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Includes bibliographical references. / M.S.
292

Ajuste do modelo de Orskov & McDonald (1979) a dados de degradação ruminal in situ utilizando mínimos quadrados ponderados / Orskov and McDonald's model fit to in situ ruminal degradation data using weighted least squares

Soares, Ana Paula Meira 27 September 2007 (has links)
The main objective of this work was to study the differences between the results obtained with weighted least squares and with ordinary least squares when fitting the model of Orskov and McDonald (1979) to dry matter (DM) and acid detergent fiber (ADF) degradation data from fistulated Nellore steers, collected with the in situ technique. The data came from an experiment laid out as a 4x4 Latin square (four animals and four periods) whose treatments were: a diet with calcium salts of fatty acids and monensin (A); a diet with whole cottonseed and monensin (B); a control diet with monensin (C); and a diet with whole cottonseed without monensin (D). Degradability measurements were collected on eight occasions (0, 3, 6, 12, 24, 48, 72 and 96 hours). Because these measurements are taken repeatedly on the same animal, the variances of the responses at the different occasions cannot be expected to be equal. The proposed analyses used both the original data (DM and ADF) and data corrected for animal and period effects. In general, the use of weighted least squares changed the results of the analyses, increasing the test statistics and altering their significance, as a consequence of removing the animal and period effects from the original data and of weighting by the inverse of the variance of the data at each occasion.
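The fitting procedure the abstract describes can be sketched compactly. The Orskov & McDonald (1979) model is p(t) = a + b(1 − e^(−ct)); for any fixed c it is linear in (a, b), so a simple approach is to solve a 2x2 weighted normal-equation system for each c on a grid and keep the best fit. The data below are synthetic, not from the thesis, and the grid-search strategy is an illustrative choice, not necessarily the author's:

```python
import math

def fit_orskov_mcdonald(t, y, w, c_grid):
    """Weighted least-squares fit of p(t) = a + b*(1 - exp(-c*t)).

    For each candidate c the model is linear in (a, b), so (a, b) solve a
    2x2 weighted normal-equation system; the c with the smallest weighted
    residual sum of squares wins. Weights are typically the inverse of the
    response variance at each sampling occasion.
    """
    best = None
    for c in c_grid:
        x = [1.0 - math.exp(-c * ti) for ti in t]
        sw = sum(w)
        swx = sum(wi * xi for wi, xi in zip(w, x))
        swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        swy = sum(wi * yi for wi, yi in zip(w, y))
        swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        det = sw * swxx - swx * swx
        a = (swxx * swy - swx * swxy) / det
        b = (sw * swxy - swx * swy) / det
        wrss = sum(wi * (yi - a - b * xi) ** 2
                   for wi, xi, yi in zip(w, x, y))
        if best is None or wrss < best[3]:
            best = (a, b, c, wrss)
    return best

# Synthetic degradation curve sampled at the occasions used in the study
t = [0, 3, 6, 12, 24, 48, 72, 96]
y = [20.0 + 60.0 * (1 - math.exp(-0.05 * ti)) for ti in t]
w = [1.0] * len(t)  # equal weights reduce WLS to ordinary least squares
a, b, c, wrss = fit_orskov_mcdonald(t, y, w, [i / 1000 for i in range(1, 201)])
```

Replacing `w` with the inverse of the per-occasion variances gives the weighted analysis the abstract contrasts with the ordinary one.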
293

Learning commonalities in RDF & SPARQL / Apprendre les points communs dans RDF et SPARQL

El Hassad, Sara 02 February 2018 (has links)
Finding commonalities between descriptions of data or knowledge is a fundamental task in Machine Learning. The formal notion characterizing such commonalities precisely is the least general generalization of descriptions, introduced by G. Plotkin in the early 70s in First Order Logic. Identifying least general generalizations has a wide range of database applications, from query optimization (e.g., sharing commonalities between queries in view selection or multi-query optimization), to recommendation in social networks (e.g., establishing connections between users based on commonalities between their profiles or searches), through exploration (e.g., classifying/categorizing datasets and identifying common social graph patterns between organizations, such as criminal ones). In this thesis we revisit the notion of least general generalization in the entire Resource Description Framework (RDF) and the popular conjunctive fragment of SPARQL, a.k.a. Basic Graph Pattern (BGP) queries. In contrast to the literature, we restrict neither the structure nor the semantics of RDF graphs and BGPQs. Our contributions include the definition and computation of least general generalizations in these two settings, which amounts to finding the largest set of commonalities between incomplete databases and conjunctive queries under deductive constraints. We also provide an experimental assessment of our technical contributions.
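Plotkin's least general generalization can be illustrated on RDF-style triples: positions where the inputs agree are kept, positions where they differ become variables, and the same variable is reused for the same pair of mismatching terms. This toy sketch is mine, not the thesis's algorithm, which handles full RDF graphs and entailment under deductive constraints:

```python
def lgg_triples(t1, t2, vars_map=None):
    """Least general generalization of two RDF-style triples
    (Plotkin-style anti-unification on flat terms).

    Agreeing positions are kept; disagreeing positions are replaced by a
    variable, with the same variable reused for the same mismatching pair
    so that repeated structure is preserved in the generalization.
    """
    if vars_map is None:
        vars_map = {}
    out = []
    for a, b in zip(t1, t2):
        if a == b:
            out.append(a)
        else:
            key = (a, b)
            if key not in vars_map:
                vars_map[key] = "?x%d" % len(vars_map)
            out.append(vars_map[key])
    return tuple(out)

g1 = ("alice", "worksAt", "MIT")
g2 = ("bob", "worksAt", "MIT")
print(lgg_triples(g1, g2))  # ('?x0', 'worksAt', 'MIT')
```

Note how `lgg_triples(("a", "p", "a"), ("b", "p", "b"))` yields `("?x0", "p", "?x0")`: the repeated mismatch maps to a single shared variable, which is exactly what makes the result the *least* general generalization rather than an arbitrary one.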
294

Heuristic discovery and design of promoters for the fine-control of metabolism in industrially relevant microbes

Gilman, James January 2018 (has links)
Predictable, robust genetic parts including constitutive promoters are one of the defining attributes of synthetic biology. Ideally, candidate promoters should cover a broad range of expression strengths and yield homogeneous output, whilst also being orthogonal to endogenous regulatory pathways. However, such libraries are not always readily available in non-model organisms, such as the industrially relevant genus Geobacillus. A multitude of different approaches are available for the identification and de novo design of prokaryotic promoters, although it may be unclear which methodology is most practical in an industrial context. Endogenous promoters may be individually isolated from upstream of well-understood genes, or bioinformatically identified en masse. Alternatively, pre-existing promoters may be mutagenised, or mathematical abstraction can be used to model promoter strength and design de novo synthetic regulatory sequences. In this investigation, bioinformatic, mathematical and mutagenic approaches to promoter discovery were directly compared. Hundreds of previously uncharacterised putative promoters were bioinformatically identified from the core genome of four Geobacillus species, and a rational sampling method was used to select sequences for in vivo characterisation. A library of 95 promoters covered a 2-log range of expression strengths when characterised in vivo using fluorescent reporter proteins. Data derived from this experimental characterisation were used to train Artificial Neural Network, Partial Least Squares and Random Forest statistical models, which quantifiably inferred the relationship between DNA sequence and function. The resulting models showed limited predictive power but good descriptive power. In particular, the models highlighted the importance of sequences upstream of the canonical -35 and -10 motifs for determining promoter function in Geobacillus. 
Additionally, two commonly used mutagenic techniques for promoter production, Saturation Mutagenesis of Flanking Regions and error-prone PCR, were applied. The resulting sequence libraries showed limited promoter activity, underlining the difficulty of deriving synthetic promoters in species where understanding of transcription regulation is limited. As such, bioinformatic identification and deep characterisation of endogenous promoter elements was posited as the most practical approach for the derivation of promoter libraries in non-model organisms of industrial interest.
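Sequence-to-strength models of the kind the abstract mentions need a numeric representation of each promoter. One-hot encoding per position is the usual choice for such regressors, though the thesis's exact featurization is not stated here — this is an illustrative sketch only:

```python
def one_hot(seq):
    """Flatten a DNA sequence into a 0/1 vector with four slots
    (A, C, G, T) per position -- a common input representation for
    sequence-to-function regression models such as Random Forests
    or neural networks."""
    index = {"A": 0, "C": 1, "G": 2, "T": 3}
    vec = [0] * (4 * len(seq))
    for i, base in enumerate(seq.upper()):
        vec[4 * i + index[base]] = 1
    return vec

# "TATAAT" is the canonical -10 hexamer; each position lights one slot
features = one_hot("TATAAT")
```

The resulting fixed-length vectors can be fed directly to standard model-fitting libraries; importantly, this representation also covers the sequences upstream of the -35 and -10 motifs that the models flagged as functionally important.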
295

Empirical studies on stock return predictability and international risk exposure

Lu, Qinye January 2016 (has links)
This thesis consists of one stock return predictability study and two international risk exposure studies. The first study shows that the statistical significance of out-of-sample predictability of market returns given by Kelly and Pruitt (2013), using a partial least squares methodology, constructed from the valuation ratios of portfolios, is overstated for two reasons. Firstly, the analysis is conducted on gross returns rather than excess returns, and this raises the apparent predictability of the equity premium due to the inclusion of predictable movements of interest rates. Secondly, the bootstrap statistics used to assess out-of-sample significance do not account for small-sample bias in the estimated coefficients. This bias is well known to affect in-sample tests of significance and I show that it is also important for out-of-sample tests of significance. Accounting for both these effects can radically change the conclusions; for example, the recursive out-of-sample R2 values for the sample period 1965-2010 are insignificant for the prediction of one-year excess returns, and one-month returns, except in the case of the book-to-market ratios of six size- and value-sorted portfolios which are significant at the 10% level. The second study examines whether U.S. common stocks are exposed to international risks, which I define as shocks to foreign markets that are orthogonal to U.S. market returns. By sorting stocks on past exposure to this risk factor I show that it is possible to create portfolios with an ex-post spread in exposure to international risk. I examine whether the international risk is priced in the cross-section of U.S. stocks, and find that for small stocks an increase in exposure to international risk results in lower returns relative to the Fama-French three-factor model. I conduct similar analysis on a measure of the international value premium and find little evidence of this risk being priced in U.S. stocks. 
The third study examines whether portfolios of U.S. stocks can mimic foreign index returns, thereby providing investors with the benefits of international diversification without the need to invest directly in assets that trade abroad. I test this proposition using index data from seven developed markets and eight emerging markets over the period 1975-2013. Portfolios of U.S. stocks are constructed out-of-sample to mimic these international indices using a step-wise procedure that selects from a variety of industry portfolios, stocks of multinational corporations, country funds and American depositary receipts. I also use a partial least squares approach to form mimicking portfolios. I show that investors are able to gain considerable exposure to emerging market indices using domestically traded stocks. However, for developed market indices it is difficult to obtain home-made exposure beyond the simple exposure of foreign indices to the U.S. market factor. Using mean-variance spanning tests I find that, with few exceptions, international indices do not improve on the investment frontier provided by the domestically constructed alternative of investing in the U.S. market index and portfolios of industries and multinational corporations.
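The out-of-sample R² at the heart of the first study compares a model's forecast errors with those of a benchmark, conventionally the recursive historical mean. A minimal sketch of the statistic, with entirely hypothetical return numbers:

```python
def oos_r2(actual, model_fc, bench_fc):
    """Out-of-sample R^2: one minus the ratio of the model's squared
    forecast errors to the benchmark's (typically the recursive
    historical mean). Positive values mean the model beats the
    benchmark; negative values mean it does worse."""
    sse_model = sum((a - f) ** 2 for a, f in zip(actual, model_fc))
    sse_bench = sum((a - b) ** 2 for a, b in zip(actual, bench_fc))
    return 1.0 - sse_model / sse_bench

# Hypothetical annual excess returns and forecasts
actual = [0.04, -0.02, 0.07, 0.01]
model = [0.03, 0.00, 0.05, 0.02]
bench = [0.025, 0.025, 0.025, 0.025]
r2 = oos_r2(actual, model, bench)
```

The study's two corrections map directly onto the inputs: using excess rather than gross returns changes `actual`, and the small-sample-bias adjustment changes the bootstrap distribution against which `r2` is judged significant, not the statistic itself.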
296

Estimating the determinants of FDI in Transition economies: comparative analysis of the Republic of Kosovo

Berisha, Jetëmira January 2012 (has links)
This study develops a panel data analysis of 27 transition and post-transition economies for the period 2003-2010. Its aim is to investigate empirically the effect of seven variables on foreign inflows, and then to use the observed findings in a comparative analysis between Kosovo and regional countries such as Albania, Bosnia and Herzegovina, Macedonia, Montenegro and Serbia. As the break period (2008-2010) is included in the data set used to model the behaviour of FDI, both the Chow test and the time-dummies technique suggest the presence of a structural break. Empirical results show that FDI is positively related to the one-year lagged effect of real GDP growth, trade openness, labour force, a low level of wages proxied by remittances, the real interest rate and a low level of corruption. In addition, the corporate income tax is found to be significant and inversely related to foreign inflows. The comparative analysis of real GDP growth rates shows that Kosovo has the most stable macroeconomic environment in the region, yet it is still confronted by a high trade deficit and a high rate of unemployment. Beyond that, the key obstacle that has undermined efforts to attract foreign investment is found to be the trade blockade of...
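The Chow test used to detect the structural break compares a pooled regression with separate sub-sample regressions: if the parameters are stable, splitting the sample should barely reduce the residual sum of squares. A self-contained sketch for a simple two-parameter regression, with hypothetical data unrelated to the study's panel:

```python
def ols_rss(x, y):
    """Residual sum of squares from a simple OLS fit y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))

def chow_f(x1, y1, x2, y2, k=2):
    """Chow F-statistic for a structural break between two sub-samples,
    with k estimated parameters (intercept and slope here). Compare
    against an F(k, n - 2k) critical value."""
    rss1, rss2 = ols_rss(x1, y1), ols_rss(x2, y2)
    rss_pooled = ols_rss(x1 + x2, y1 + y2)
    n = len(x1) + len(x2)
    return ((rss_pooled - rss1 - rss2) / k) / ((rss1 + rss2) / (n - 2 * k))

# Hypothetical pre-break vs post-break samples with an obvious regime change
x1, y1 = [1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1]      # slope ~ 2
x2, y2 = [1, 2, 3, 4, 5], [12.0, 11.8, 12.2, 11.9, 12.1]  # flat ~ 12
f = chow_f(x1, y1, x2, y2)  # large F => reject parameter stability
```

The time-dummies approach the study pairs with this test reaches the same goal differently: interacting regressors with a post-2008 indicator and testing the interactions jointly.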
297

On the regularization of the recursive least squares algorithm. / Sobre a regularização do algoritmo dos mínimos quadrados recursivos.

Tsakiris, Manolis 25 June 2010 (has links)
This thesis is concerned with the regularization of the Recursive Least-Squares (RLS) algorithm. In the first part of the thesis, a novel regularized exponentially weighted array RLS algorithm is developed which circumvents the problem of fading regularization inherent to the standard regularized exponentially weighted RLS formulation, while allowing generic time-varying regularization matrices. The standard equations are directly perturbed via a chosen regularization matrix, and the resulting recursions are then extended to array form. The price paid is an increase in computational complexity, which becomes cubic. The superiority of the algorithm over alternatives is demonstrated via simulations in the context of adaptive beamforming, where low filter orders are employed and complexity is therefore not an issue. In the second part of the thesis, an alternative criterion is motivated and proposed for the dynamic adjustment of regularization in the standard RLS algorithm. The regularization is achieved implicitly via dithering of the input signal. The proposed criterion is of general applicability and aims at balancing the accuracy of the numerical solution of a perturbed linear system of equations against its distance from the analytical solution of the original system, for a given computational precision. Simulations show that the proposed criterion can be used effectively to compensate for large condition numbers, small finite precision and unnecessarily large regularization values.
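For orientation, here is the standard exponentially weighted RLS recursion that both parts of the thesis build on. Regularization enters through the initialization P(0) = (1/δ)I; the optional `dither` parameter adds white noise to the input as a simplified stand-in for the dithering scheme of the second part. The toy identification problem and all parameter values are mine, not the thesis's:

```python
import random

def rls_identify(x, d, order, lam=0.99, delta=1e-2, dither=0.0, seed=0):
    """Exponentially weighted RLS identification of an FIR filter.

    Recursion: gain g = P*u / (lam + u'P*u), a priori error
    e = d - w'u, update w <- w + g*e, P <- (P - g*(P*u)')/lam.
    P(0) = (1/delta)*I provides the (fading) regularization; `dither`
    optionally adds white noise to the input before adaptation.
    """
    if dither > 0.0:
        rng = random.Random(seed)
        x = [xi + rng.gauss(0.0, dither) for xi in x]
    w = [0.0] * order
    P = [[(1.0 / delta if i == j else 0.0) for j in range(order)]
         for i in range(order)]
    for n in range(order - 1, len(x)):
        u = [x[n - i] for i in range(order)]                 # regressor
        Pu = [sum(P[i][j] * u[j] for j in range(order)) for i in range(order)]
        denom = lam + sum(ui * pi for ui, pi in zip(u, Pu))
        g = [pi / denom for pi in Pu]                        # gain vector
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))      # a priori error
        w = [wi + gi * e for wi, gi in zip(w, g)]
        P = [[(P[i][j] - g[i] * Pu[j]) / lam for j in range(order)]
             for i in range(order)]
    return w

# Toy system: d[n] = 0.5*x[n] - 0.3*x[n-1], identified from 300 samples
rng = random.Random(1)
x = [rng.gauss(0.0, 1.0) for _ in range(300)]
d = [0.0] + [0.5 * x[n] - 0.3 * x[n - 1] for n in range(1, len(x))]
w = rls_identify(x, d, order=2)
```

The "fading regularization" problem is visible in the recursion itself: the influence of P(0), and hence of δ, decays geometrically at rate λ, which is precisely what the array algorithm of the first part is designed to avoid.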
298

Exploring Patterns in Due Process Hearing Decisions Regarding the Usage of One-on-One Inclusion Aides for Students with Disabilities

Perkins, Joel K. 01 June 2017 (has links)
This study reviews due process hearing decisions from the years 2014 and 2015. It is primarily a legal analysis, specifically examining legal and regulatory patterns regarding the provision of one-on-one special education aides for students with disabilities in general education settings. Our findings from the due process hearing decisions reveal that one-on-one aides for students with a wide variety of disabilities are being provided with greater frequency than we anticipated and that, specifically, behavioral aides are being provided for students with autism. Decisions involving disabilities such as hearing impairment show higher provision rates, while other disabilities such as autism and emotional disturbance do not see the same rate of provision. There are clear patterns of differences between the states in the number of cases that reach due process hearings and in the number of one-on-one aides provided.
299

A Qualitative Analysis of a Teacher Support Program for Educating Students with Emotional Disturbance in an Inclusive Setting

Harmon, Crystal Williams 20 March 2008 (has links)
This study examined the experiences of teachers who included students identified as having emotional disturbance in their classes while participating in a teacher support program. A secondary analysis of data collected throughout the duration of the support program was conducted to identify core issues teachers faced as they included students with emotional disturbance in their classes. The first stage of analysis involved pre-existing data from the support program. Data were organized into four periods which chronologically represented the teachers' experiences. From these data, eight core themes were identified: concerns about the lack of instructional adaptations made for students with emotional disturbance; appropriate consequences for disruptive behavior in general education; the type of additional student information teachers wanted; student readiness for inclusion; the need for a supportive environment; training needs for inclusion; class size pertaining to the number of students with emotional disturbance in general education classes; and teacher feedback about the support program. To provide clarification and elaboration of these core issues, stage two consisted of a focus group of eight teachers who participated in the program. Identified strengths that contributed to the success of the support program included the role of the coordinator as a support person for both students and teachers and the benefits of having a supportive environment for students with emotional disturbance to return to for extra assistance. Major conclusions from this study suggest that student readiness for inclusion, teacher support needed during inclusion, and teacher attitudes and beliefs about inclusion are critical components of the inclusion process. 
Implications for future research include identifying skills needed by students with emotional disturbance to transition to inclusive settings, examining the setting demands of the general education classroom, exploring students' perceptions of inclusion, and identifying effective practices for preparing teachers to work with students in inclusive settings.
300

The Meaning and Means of Inclusion for Students with Autism Spectrum Disorders: A Qualitative Study of Educators’ and Parents’ Attitudes, Beliefs, and Decision-Making Strategies

Sansosti, Jenine M 25 June 2008 (has links)
The practice of inclusion, and even the term itself, have been the subject of controversy over the last several decades, and it appears that "inclusion" may look very different depending upon the student, educator, and setting (Fuchs & Fuchs, 1994). Recently, placement in general education settings has become a dominant service delivery model for individuals with Autism Spectrum Disorder (ASD) (Simpson & Myles, 1998), yet Individual Education Programs (IEPs) for students with ASD tend to be the most often disputed and often contain procedural errors, including failure to consider the Least Restrictive Environment mandate (Yell et al., 2003). This study represents a qualitative case study of a school district in West Central Florida working to build capacity for inclusive education. Qualitative case study methodology was used to explore (a) educators' definitions, attitudes, beliefs, and emotions regarding inclusion of students with ASD, (b) how these understandings and attitudes impact the way educators make decisions about inclusion and educational programs for students with ASD, and (c) educators' and parents' criteria for determining "successful" inclusion and their perceptions about the success of current inclusion efforts. A team of educators (general education, special education, specialists, and administrators) who were involved in inclusion efforts was purposively selected for recruitment in this study. Two focus groups were conducted to engage them in discussion and decision-making regarding educational plans for students with ASD. Subsequently, semi-structured interviews were conducted individually with each member of the team as a follow-up to the focus group. Additionally, individual semi-structured interviews were conducted with parents of included students with ASD. 
Results indicated that educators understood inclusive education to be a highly individualized enterprise developed on a "case-by-case basis," but were generally positive about inclusion for students with ASD. Educator participants articulated the characteristics of students they believed to be "ideal inclusion candidates"; students' behavioral functioning and potential for disrupting typical peers was a major consideration. Parents and educators shared very similar goals for students with ASD, but told stories suggesting their interactions often involve conflict and ill will. Implications for practice and recommendations for future research are offered.
