  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Using the features of translated language to investigate translation expertise : a corpus-based study / K.R. Redelinghuys

Redelinghuys, Karien Reinette January 2013 (has links)
Research based on translation expertise, which is also sometimes referred to as translation competence, has been a growing area of investigation in translation studies. These studies have not only focused on how translation expertise may be conceptualised and defined, but also on how this expertise is acquired and developed by translators. One of the key observations that arise from an overview of current research in the field of translation expertise is the prevalence of process-oriented methodologies in the field, with product-oriented methodologies used comparatively infrequently. This study is based on the assumption that product-oriented methodologies, and specifically the corpus-based approach, may provide new insights into translation expertise. The study therefore sets out to address the lack of comprehensive and systematic corpus-based analyses of translation expertise. One of the foremost concerns of corpus-based translation studies has been the investigation of what are known as the features of translated language, which are often categorised as explicitation, simplification, normalisation and levelling-out. The main objective of this study is to investigate the hypothesis that the features of translated language can be taken as an index of translation expertise. The hypothesis is founded on the premise that if the features of translated language are considered to be the textual traces of translation strategies, then the different translation strategies associated with different levels of translation expertise will be reflected in different frequencies and distributions of these features in the work of experienced and inexperienced translators. The study therefore aimed to determine whether there are significant differences in the frequency and distribution of the features of translated language in the work of experienced and inexperienced translators.
As background to this main research question, the study also investigated a secondary hypothesis, namely that translated language demonstrates unique features that are the consequence of various aspects of the translation process. A custom-built comparable English corpus was used for the study, comprising three subcorpora: translations by experienced translators, translations by inexperienced translators, and non-translations. A selection of linguistic operationalisations was chosen for each of the four features of translated language. The differences in the frequency and distribution of these linguistic operationalisations in the three subcorpora were analysed by means of parametric or non-parametric ANOVA. The findings of the study provide some support for both hypotheses. In terms of the translation expertise hypothesis, some of the features of translated language demonstrate significantly different frequencies in the work of experienced translators compared to the work of inexperienced translators. It was found that experienced translators are less explicit in terms of formal completeness; simplify less frequently, in that they use a more varied vocabulary, longer sentences, and produce translations with a lower readability index score; and use contractions more frequently, which signals that they normalise less than inexperienced translators. However, experienced translators also use neologisms and loanwords less frequently than inexperienced translators, which suggests that normalisation occurs more often in the work of experienced translators when it comes to lexical creativity. These linguistic differences are taken as indicative of the different translation strategies used by the two groups of translators.
It is believed that the differences are primarily caused by variations in experienced and inexperienced translators' sensitivity to translation norms, their awareness of written language conventions, their language competence (which involves syntactic, morphological and vocabulary knowledge), and their sensitivity to register. Furthermore, it was also found that there are indeed significant differences between translated and non-translated language, which also provides support for the second hypothesis investigated in this study. Translators explicitate more frequently than non-translators in terms of formal completeness, tend to have a less extensive vocabulary, tend to raise the overall formality of their translations, and produce texts that are less creative and more conformist than non-translators' texts. However, statistical support is lacking for the hypothesis that translators explicitate more at the propositional level than original text producers do, as well as for the hypothesis that translators are inclined to use a more neutral middle register. / MA (Language Practice), North-West University, Vaal Triangle Campus, 2013
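The statistical comparison described in this abstract — parametric or non-parametric ANOVA over feature frequencies in three subcorpora — can be sketched as follows. This is an illustrative, self-contained implementation of a one-way ANOVA F statistic; the frequency values are invented for demonstration and are not data from the study.

```python
# Minimal one-way ANOVA F statistic: between-group mean square divided
# by within-group mean square. Pure Python, no external dependencies.
def one_way_anova_f(*groups):
    """Return the F statistic for a one-way ANOVA over the given groups."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented contraction counts per 1,000 words in each subcorpus
# (hypothetical numbers, purely for illustration).
experienced = [4.1, 3.8, 5.0, 4.6, 4.3]
inexperienced = [2.2, 2.9, 1.8, 2.5, 2.0]
non_translated = [5.5, 6.1, 5.8, 5.2, 6.0]
f_stat = one_way_anova_f(experienced, inexperienced, non_translated)
```

A large F statistic relative to the F distribution's critical value would indicate a significant frequency difference between the subcorpora; a non-parametric alternative such as Kruskal-Wallis would be used when normality assumptions fail.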
252

Investigation into the role of sequence-driven-features and amino acid indices for the prediction of structural classes of proteins

Nanuwa, Sundeep January 2013 (has links)
The work undertaken within this thesis is towards the development of a representative set of sequence-driven features for the prediction of structural classes of proteins. Proteins are biological molecules that make living things function; to determine the function of a protein, its structure must be known, because the structure dictates its physical capabilities. A protein is generally classified into one of the four main structural classes, namely all-α, all-β, α + β or α / β, which are based on the arrangements and gross content of the secondary structure elements. Current methods assign structural classes by manual inspection, which is a slow process. In order to address this problem, this thesis is concerned with the development of automated prediction of structural classes of proteins and the extraction of a small but robust set of sequence-driven features by using the amino acid indices. The first main study undertook a comprehensive analysis of the largest collection of sequence-driven features, comprising an existing set of 1479 descriptor values grouped into ten different feature groups. The results show that composition-based feature groups are the most representative of the four main structural classes, achieving a predictive accuracy of 63.87%. This finding led to the second main study, the development of the generalised amino acid composition (GAAC) method, in which amino acid index values are used to weight the corresponding amino acids. The GAAC method results in a higher accuracy of 68.02%. The third study refined the amino acid indices database, which resulted in the highest accuracy of 75.52%. The main contributions of this thesis are the development of four computationally extracted sequence-driven feature sets based on the underused amino acid indices.
Two of these methods, GAAC and the hybrid method, have shown improvement over traditional sequence-driven features in terms of smaller, more refined feature sizes and higher classification accuracy. A further contribution is the development of six novel, non-redundant sets of the amino acid indices dataset, each of which is more representative than the original database. Finally, two large 25% and 40% homology datasets were constructed, consisting of over 5000 and 7000 protein samples, respectively. A public webserver has been developed, located at http://www.generalised-protein-sequence-features.com, which allows biologists and bioinformaticians to extract GAAC sequence-driven features from any inputted protein sequence.
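The core GAAC idea — weighting each residue's composition by a value from an amino acid index rather than counting frequencies alone — can be sketched as below. The index values here are placeholders, not the actual AAindex entries the thesis evaluates, and the function is a minimal illustration rather than the thesis's implementation.

```python
# Sketch of generalised amino acid composition (GAAC): classical amino
# acid composition is the relative frequency of each of the 20 residues;
# GAAC multiplies each frequency by that residue's value in an amino
# acid index. The index below is a placeholder (1..20), not a real
# AAindex entry.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PLACEHOLDER_INDEX = {aa: float(i) for i, aa in enumerate(AMINO_ACIDS, start=1)}

def gaac_features(sequence: str, index=PLACEHOLDER_INDEX) -> dict:
    """Return the 20 index-weighted composition values for a sequence."""
    n = len(sequence)
    return {aa: (sequence.count(aa) / n) * index[aa] for aa in AMINO_ACIDS}
```

Swapping in different index tables (e.g. hydrophobicity or volume scales) yields different 20-dimensional feature vectors, which is what allows a small feature set to remain expressive.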
253

Common Features in Vector Nonlinear Time Series Models

Li, Dao January 2013 (has links)
This thesis consists of four manuscripts in the area of nonlinear time series econometrics, on topics of testing, modeling and forecasting nonlinear common features. The aim of this thesis is to develop new econometric contributions for hypothesis testing and forecasting in this area. Both stationary and nonstationary time series are considered. A definition of common features appropriate to each class is proposed. Based on this definition, a vector nonlinear time series model with common features is set up for testing for common features. Once well specified, the proposed models are also available for forecasting. The first paper addresses a testing procedure for nonstationary time series. A class of nonlinear cointegration, smooth-transition (ST) cointegration, is examined. ST cointegration nests the previously developed linear and threshold cointegration. An F-type test for examining ST cointegration is derived for the case where the transition variables imposed are stationary rather than nonstationary. The latter make the test standard, while the former make it nonstandard. This has important implications for empirical work: it is crucial to distinguish between the cases with stationary and nonstationary transition variables so that the correct test can be used. The second and fourth papers develop testing approaches for stationary time series. In particular, the vector ST autoregressive (VSTAR) model is extended to allow for common nonlinear features (CNFs). These two papers propose a modeling procedure and derive tests for the presence of CNFs. Building on model specification using the testing contributions above, the third paper considers forecasting with vector nonlinear time series models and extends the procedures available for univariate nonlinear models.
The VSTAR model with CNFs and the ST cointegration model in the previous papers are exemplified in detail, and thereafter illustrated within two corresponding macroeconomic data sets.
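Smooth-transition models of the kind described here typically govern regime change with a logistic function of a transition variable. The abstract does not specify the transition function, so the logistic form below is an assumption, shown only to illustrate how ST models nest the linear and threshold cases.

```python
import math

def logistic_transition(s: float, gamma: float, c: float) -> float:
    """G(s; gamma, c) = 1 / (1 + exp(-gamma * (s - c))).

    gamma -> 0 flattens G toward 1/2, recovering a linear model;
    gamma -> infinity sharpens G into a step at c, recovering a
    threshold model. Intermediate gamma gives smooth transition.
    """
    return 1.0 / (1.0 + math.exp(-gamma * (s - c)))
```

In a VSTAR specification, the conditional mean is a weighted combination of two regimes, with G(s_t; γ, c) supplying the weight as a function of the transition variable s_t.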
254

Lipid Residues Preserved in Sheltered Bedrock Features at Gila Cliff Dwellings National Monument, New Mexico

Buonasera, Tammy 31 October 2016 (has links)
Bedrock features represent various economic, social, and symbolic aspects of past societies, but have historically received little study, particularly in North America. Fortunately, new techniques for analyzing spatial configurations, use-wear, and organic residues are beginning to unlock more of the interpretive potential of these features. Though preliminary in nature, the present study contributes to this trend by documenting an application of lipid analysis to bedrock features in a dry rockshelter. Results of this initial application indicate that bedrock features in dry rockshelters may provide especially favorable conditions for the preservation and interpretation of ancient organic residues. Abundant lipids, comparable to concentrations present in some pottery sherds, were extracted from a bedrock grinding surface at Gila Cliff Dwellings National Monument and analyzed using gas chromatography-mass spectrometry. Though the lipids were highly oxidized, degradation products indicative of former unsaturated fatty acids were retained. Comparisons to experimentally aged residues, and absence of a known biomarker for maize, indicate that the bulk of the lipids preserved in the milling surface probably derive from processing an oily nut or seed resource, and not from processing maize. Substantially lower amounts of lipids were recovered from a small, blackened cupule. It is hypothesized that some portion of the lipids in the blackened cupule was deposited from condensed smoke of cooking and heating fires in the caves. Potential for the preservation of organic residues in similar sheltered bedrock contexts is discussed, and a practical method for sampling bedrock features in the field is described.
255

Exploiting abstract syntax trees to locate software defects

Shippey, Thomas Joshua January 2015 (has links)
Context. Software defect prediction aims to reduce the large costs involved with faults in a software system. A wide range of traditional software metrics have been evaluated as potential defect indicators. These traditional metrics are derived from the source code or from the software development process. Studies have shown that no metric clearly outperforms another, and identifying defect-prone code using traditional metrics has reached a performance ceiling. Less traditional metrics have been studied, derived from the natural language of the source code. These newer, less traditional and finer-grained metrics have shown promise within defect prediction. Aims. The aim of this dissertation is to study the relationship between short Java constructs and the faultiness of source code. To study this relationship, this dissertation introduces the concepts of a Java sequence and a Java code snippet. Sequences are created by using the Java abstract syntax tree: the ordering of the nodes within the abstract syntax tree creates the sequences, while small subsequences of each sequence are the code snippets. The dissertation tries to find a relationship between the code snippets and faulty and non-faulty code. This dissertation also looks at the evolution of the code snippets as a system matures, to discover whether code snippets significantly associated with faulty code change over time. Methods. To achieve the aims of the dissertation, two main techniques have been developed: finding defective code, and extracting Java sequences and code snippets. Finding defective code has been split into two areas: finding the defect fix points and the defect insertion points. To find the defect fix points, an implementation of the bug-linking algorithm, called S + e, has been developed. Two algorithms were developed to extract the sequences and the code snippets.
The code snippets are analysed using the binomial test to find which ones are significantly associated with faulty and non-faulty code. These techniques have been applied to five different Java datasets: ArgoUML, AspectJ and three releases of Eclipse.JDT.core. Results. There are significant associations between some code snippets and faulty code. Frequently occurring fault-prone code snippets include those associated with identifiers, method calls and variables. There are some code snippets significantly associated with faults that are always in faulty code. There are 201 code snippets significantly associated with faults across all five of the systems. The technique is unable to find any significant associations between code snippets and non-faulty code. The relationship between code snippets and faults seems to change as the system evolves, with more snippets becoming fault-prone as Eclipse.JDT.core evolved over the three releases analysed. Conclusions. This dissertation has introduced the concept of code snippets into software engineering and defect prediction. The use of code snippets offers a promising approach to identifying potentially defective code. Unlike previous approaches, code snippets are based on a comprehensive analysis of low-level code features and potentially allow the full set of code defects to be identified. Initial research into the relationship between code snippets and faults has shown that some code constructs or features are significantly related to software faults. The significant associations between code snippets and faults have provided additional empirical evidence for some constructs already researched within defect prediction. The code snippets have shown that some constructs significantly associated with faults are located in all five systems; although this set is small, finding any defect indicators that transfer successfully from one system to another is rare.
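The binomial test used to flag a snippet as significantly fault-associated can be sketched as follows. The counts and the baseline fault rate below are invented for illustration; the dissertation's actual datasets and thresholds may differ.

```python
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """One-sided binomial test: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

# Invented example: a snippet occurs in 30 files, 18 of which are faulty,
# against a baseline in which 40% of all files are faulty. If faults hit
# files at the baseline rate, seeing 18 or more faulty files is unlikely.
p_value = binom_sf(18, 30, 0.40)
snippet_is_fault_prone = p_value < 0.05  # conventional significance level
```

A snippet whose observed co-occurrence with faulty files is this improbable under the baseline rate would be recorded as significantly associated with faults.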
256

Towards Epistemic and Interpretative Holism : A critique of methodological approaches in research on learning / Epistemisk holism och tolkningsholism : En kritik av metodologiska ansatser i forskning om lärande

Haglund, Liza January 2017 (has links)
The central concern of this thesis is to discuss interpretations of learning in educational research. A point of departure is taken in core epistemological and ontological assumptions informing three major approaches to learning: behaviourism, cognitive constructivism and socioculturalism. It is argued that all three perspectives provide important insights into research on learning, but each alone runs the risk of reducing learning and interpretations of learning to single aspects. Specific attention is therefore given to Intentional Analysis, as it has been developed to account for sociocultural aspects that influence learning and individual cognition. It is argued that interpretations of learning processes face challenges (different kinds of holism, underdetermination and the complexity of intentionality) that need to be accounted for in order to make valid interpretations. Interpretation is therefore also discussed in light of philosopher Donald Davidson's theories of knowledge and interpretation. It is suggested that his theories may provide aspects of an ontological and epistemological stance that can form the basis for interpretations of learning in educational research. A first brief sketch, referred to as 'epistemic holism', is thus drawn. The thesis also exemplifies how such a stance can inform empirical research. It provides a first formulation of research strategies – a so-called 'interpretative holism'. The thesis discusses what such a stance may imply with regard to the nature and location of knowledge and the status of the learning situation. Ascribing meaning to observed behaviour, as it is described in this thesis, implies that an action is always an action under a specific description. Different descriptions may not be contradictory, but if we do not know the learner's language use, we cannot know whether there is a difference in language or in beliefs.
It is argued that the principle of charity and reference to saliency, that is, what appears as the figure for the learner, may help us decide. However, saliency appears as a phenomenon not only in relation to physical objects and events, but also in the symbolic world, and thus requires that the analysis extend beyond the mere transcription of an interview or the description of an observation. Hence, a conclusion to be drawn from this thesis is that the very question of what counts as data in the interpretation of complex learning processes is up for discussion. / At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 4: Manuscript.
258

Utilisation of digital media in improving children's reading habits

Jurf, Dima Rafat Mohammad January 2012 (has links)
Although digital media has been exploited to improve digital libraries, social networking sites, and book promotion for adult and child stakeholders, encouraging children who have the choice to read either from a book or on a screen remains limited worldwide, including in Jordan. This interest has meant that data about children's reading habits were needed, and the present study was intended as a contribution towards this aim. Interviews were conducted with Jordanian writers, publishers, child specialists, and various children's cultural centres. The managers and personnel unanimously indicated that Jordanian children are not good readers and that a limited number of books are published for children, as there are real barriers preventing Jordanian writers from publishing books. Subjecting typical children's websites - 'Club Penguin', 'PBS Kids', 'A Story before Bed', 'Baraem', 'Storyline Online', and 'Raneen' - to evaluation showed that 'Club Penguin' achieved the highest rank among the websites in terms of multimodal features, usability, and language, while 'PBS Kids' ranked highest for interactivity, and 'A Story before Bed' ranked highest for reading activities. It was also found that most children were satisfied with usability and ease of use rather than the structure or aesthetics of a website, and were more attracted to websites that provide multimodal features such as special characters, narration, gesture, and interactivity. The parameters of the targeted websites obtained from the survey were used as guidance in the design structure of the KITABAK website, a virtual reading environment for children's reading practices.
The evaluation results showed a significant effect in encouraging children's reading habits and reading from printed books alongside the website; girls showed more interest in reading than boys; and there was a clear willingness to adopt the website as part of the Jordanian school curriculum. In addition, the KITABAK website was accepted significantly more than 'Club Penguin', mainly because of its facilities, games and reading activities. Results also showed that children who tested the KITABAK website over a one-week period accepted it significantly more than those who tested it only once.
259

Feature competition in Algonquian agreement

Xu, Yadong 12 September 2016 (has links)
This thesis investigates the patterning of the Algonquian “central agreement”, i.e. the primary person-number agreement marking, from a diachronic and comparative perspective. The central agreement patterns differ in the two orders: in the conjunct it is fusional and often portmanteau, while in the independent it is discontinuous and non-portmanteau. In addition to these differences, there are also some commonalities, such as a pattern in which 1p consistently outranks 2p in both orders. This thesis shows that the differences between the two orders can be taken to reflect variation in the features of the syntactic probe and different morphological spell-out rules, while the shared properties follow from the underlying structure of φ-features. In particular, it is proposed that an additional person feature under the [plural] node causes first person plural to be privileged over second person plural in the competition among vocabulary items in post-syntactic spell-out. / October 2016
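The competition among vocabulary items described here can be illustrated with a toy subset-principle sketch. The feature bundles and exponent labels below are hypothetical simplifications invented for this illustration, not the thesis's actual analysis.

```python
# Toy sketch of post-syntactic vocabulary insertion: a vocabulary item
# matches a node if its features are a subset of the node's features,
# and the most specific match (largest feature set) wins. The extra
# person feature under [plural], labelled "p-under-pl" here, is what
# lets the 1p item beat the 2p item in the competition.
VOCAB_ITEMS = [
    (frozenset({"1", "pl", "p-under-pl"}), "-1p"),  # hypothetical 1p plural item
    (frozenset({"2", "pl"}), "-2p"),                # hypothetical 2p plural item
]

def spell_out(node: frozenset) -> str:
    """Insert the most specific vocabulary item whose features fit the node."""
    matches = [(feats, form) for feats, form in VOCAB_ITEMS if feats <= node]
    return max(matches, key=lambda m: len(m[0]))[1]

# A node bearing both first and second person plural features: the richer
# 1p item outcompetes the 2p item, mirroring "1p outranks 2p".
winner = spell_out(frozenset({"1", "2", "pl", "p-under-pl"}))
```

The point of the sketch is only the competition logic: a richer matching bundle wins, so positing an extra person feature under [plural] derives the observed 1p-over-2p priority.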
260

Produkce amoniaku koloniemi mutantů a stárnutí strukturovaných kolonií Saccharomyces cerevisiae / Ammonia production by colonies of mutants and aging of wrinkled colonies of Saccharomyces cerevisiae

Nedbálková, Jana January 2010 (has links)
Production of ammonia by the colonies of mutants and aging of wrinkled colonies of Saccharomyces cerevisiae. The aim of this diploma thesis is to observe the development, or more precisely the aging, of cells in yeast colonies of Saccharomyces cerevisiae. Yeast cells of S. cerevisiae form multicellular organised structures, i.e. colonies, on a solid substrate, in which intercellular interactions occur. These interactions influence the forming, morphology and aging of yeast colonies. This diploma thesis focuses partly on the changes in ammonia production by giant colonies of deletion mutants and partly on the aging of colonies with the wrinkled morphology. I characterised mutant strains of S. cerevisiae with deletions in the RTG1, RTG2, RTG3, FIS1 and CIT2 genes, whose products play an important role in colony development. The transcription of these genes changes during the transition from the acidic to the alkali phase in the developmental process of the colonies. I found that the rate of ammonia production was in accordance with the results of the alkalization of the giant colonies' surroundings, and that the mutants derived from the BY strain produced ammonia from the 15th day. The rate of ammonia production by the rtg3∆ strain was comparable to the parental strain. Compared to the parental strain, lower...
