About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Computational models for declarative languages and their formal specifications in CSP

Lee, M. K. O. January 1986
No description available.
2

A Case Study of Compact Core French Models: A Pedagogic Perspective

Marshall, Pamela 10 January 2012
The overriding objective of core French (CF) teaching in Canada since the National Core French Study (NCFS) has been communicative competence (R. Leblanc, 1990). Results from the traditional form of CF, though, suggest that students are not developing the desired levels of communicative competence in the drip-feed (short daily periods) model (Lapkin, Harley, & Taylor, 1993). The present study investigates the role of compacted second-language program formats in promoting higher levels of language proficiency and achievement among elementary core French students; in particular, it examines the pedagogic approach, based on the principle that longer class periods should facilitate a more communicative/experiential teaching approach. Students in three Grade 7 classes served as participants: two classes served as the compacted experimental classes, and the other as a comparison class. Pre-tests, immediate post-tests, and delayed post-tests recorded differences in student achievement. A multi-dimensional, project-based curriculum approach was implemented in all three classes and was documented through the teacher's observations in her daybook and daily journal. Student attitudes toward their CF program format and their self-assessed language proficiency were measured during recorded focus group sessions and on student questionnaires. Parental and teacher perceptions of student attitudes were measured using a short survey. Results indicate that students in both the compact and comparison classes performed similarly, with few significant differences in measured language growth or retention over time. Parents of all classes indicated satisfaction with the teaching and learning activities and with the program format in which their child was enrolled. Excerpts from the teacher's daybook and reflective journal demonstrated that communicative activities fostering student interaction in the target language were more frequently and readily implemented in the longer compact CF periods. Students generally stated a preference for the program format in which they were enrolled, although only students in the compact classes outlined pedagogic reasons in support of their preference. Additionally, most students self-assessed a higher level of language competence than in previous years, which students in the compact (experimental) classes attributed to the longer class periods, stating that these promoted task completion, group work, in-depth projects, and communicative activities.
3

Supervised language models for temporal resolution of text in absence of explicit temporal cues

Kumar, Abhimanu 18 March 2014
This thesis explores the temporal analysis of text using the implicit temporal cues present in a document. We consider the case where all explicit temporal expressions, such as specific dates or years, are removed from the text and a bag-of-words approach is used to predict a timestamp for the text. A set of gold-standard text documents with timestamps is used as the training set. We also predict time spans for Wikipedia biographies based on their text. Our training texts range from 3800 BC to the present day. We partition this timeline into equal-sized chronons and build a probability histogram for a test document over this chronon sequence; the document is assigned to the chronon with the highest probability. We use two approaches: 1) a generative language model with Bayesian priors, and 2) a KL-divergence-based model. To counter sparsity in the documents and chronons, we use three different smoothing techniques across models. We test our models on three diverse datasets: 1) Wikipedia biographies, 2) Gutenberg short stories, and 3) a Wikipedia years dataset. Our models are trained on a subset of Wikipedia biographies. We concentrate on two prediction tasks: 1) timestamp prediction for a generic text, i.e., mid-span prediction for a Wikipedia biography, and 2) life-span prediction for a Wikipedia biography. We achieve an f-score of 81.1% for the life-span prediction task and a mean error of around 36 years for mid-span prediction for biographies ranging from the present day back to 3800 BC. The best model gives a mean error of 18 years for publication-date prediction for short stories uniformly distributed between 1700 AD and 2010 AD. Our models exploit the temporal distribution of text for associating time, and our error analysis reveals interesting properties of the models and datasets used. We also combine explicit temporal cues extracted from a document with its implicit cues to obtain a combined prediction model, and show that this combination of date-based predictions and language-model divergence predictions is highly effective for this task: our best model obtains an f-score of 81.1%, and the median error between actual and predicted life-span midpoints is 6 years. Extending this combination is one emphasis of our future work. The above analyses demonstrate that there are strong temporal cues within texts that can be exploited statistically for temporal prediction. Along the way, we also create benchmark datasets for the research community to further explore this problem.
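
The chronon assignment lends itself to a compact illustration. Below is a minimal sketch of the KL-divergence variant, assuming unigram chronon language models and Jelinek-Mercer smoothing; the smoothing weight and data structures are illustrative assumptions, not the thesis's exact configuration.

```python
import math
from collections import Counter

LAMBDA = 0.7  # Jelinek-Mercer smoothing weight (an assumed value)

def smoothed_dist(tokens, background, lam=LAMBDA):
    # Mix the document's maximum-likelihood unigram estimate with a
    # background model so no word gets exactly zero probability.
    counts = Counter(tokens)
    total = len(tokens)
    return {w: lam * counts[w] / total + (1 - lam) * p
            for w, p in background.items()}

def kl_divergence(p, q):
    # KL(p || q) over the shared vocabulary; assumes q[w] > 0 everywhere.
    return sum(pw * math.log(pw / q[w]) for w, pw in p.items() if pw > 0)

def predict_chronon(doc_tokens, chronon_dists, background):
    # Assign the document to the chronon whose smoothed language model
    # is closest (minimum KL divergence) to the document's distribution.
    doc_dist = smoothed_dist(doc_tokens, background)
    return min(chronon_dists,
               key=lambda c: kl_divergence(doc_dist, chronon_dists[c]))
```

The generative variant would instead score each chronon by the probability its language model assigns to the document's words, with Bayesian priors over the chronons.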
4

A Language-Model-Based Approach for Detecting Incompleteness in Natural-Language Requirements

Luitel, Dipeeka 24 May 2023
[Context and motivation]: Incompleteness in natural-language requirements is a challenging problem. [Question/Problem]: A common technique for detecting incompleteness in requirements is checking the requirements against external sources. With the emergence of language models such as BERT, an interesting question is whether language models are useful external sources for finding potential incompleteness in requirements. [Principal ideas/results]: We mask words in requirements and have BERT's masked language model (MLM) generate contextualized predictions for filling the masked slots. We simulate incompleteness by withholding content from requirements and measure BERT's ability to predict terminology that is present in the withheld content but absent in the content disclosed to BERT. [Contributions]: BERT can be configured to generate multiple predictions per mask. Our first contribution is determining the number of predictions per mask that offers an optimal trade-off between effectively discovering omissions in requirements and the level of noise in the predictions. Our second contribution is devising a machine-learning-based filter that post-processes BERT's predictions to further reduce noise. We empirically evaluate our solution over 40 requirements specifications drawn from the PURE dataset [30]. Our results indicate that: (1) predictions made by BERT are highly effective at pinpointing terminology that is missing from requirements, and (2) our filter can substantially reduce noise in the predictions, making BERT a more compelling aid for improving completeness in requirements.
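
As a concrete illustration of the masking step, the sketch below uses the HuggingFace `fill-mask` pipeline with `bert-base-uncased`; the example requirement and the cutoff of 15 predictions per mask are assumptions for illustration, not the paper's tuned configuration.

```python
from transformers import pipeline

# Load BERT's masked language model; top_k controls how many
# candidate fillers are returned per masked slot.
fill_mask = pipeline("fill-mask", model="bert-base-uncased", top_k=15)

requirement = "The system shall log every [MASK] attempt made by a user."
for pred in fill_mask(requirement):
    # Each prediction pairs a candidate token with a confidence score;
    # a post-processing filter of the kind the paper devises would
    # prune this list to reduce noise.
    print(f"{pred['token_str']:<15} {pred['score']:.3f}")
```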
5

Identifying High Acute Care Users Among Bipolar and Schizophrenia Patients

Shuo Li (17499660) 03 January 2024
The electronic health record (EHR) documents a patient's medical history, with information such as demographics, diagnostic history, procedures, laboratory tests, and observations made by healthcare providers. This source of information can help support preventive health care and management. The present thesis explores the potential for EHR-driven models to predict acute care utilization (ACU), defined as visits to an emergency department (ED) or inpatient hospitalization (IH). Acute care is often associated with significant costs compared to outpatient visits. Identifying patients at risk can improve the quality of care for patients and can reduce the need for these services, making healthcare organizations more cost-effective. This is especially important for vulnerable patients, including those suffering from schizophrenia and bipolar disorders. This study compares the ability of the MedBERT architecture, the MedBERT+ architecture, and standard machine learning models to identify at-risk patients. MedBERT is a deep learning language model trained on diagnosis codes to predict a patient's risk of certain disease conditions. MedBERT+, the architecture introduced in this study, is also trained on diagnosis codes; however, it adds socio-demographic embeddings and targets a different outcome, namely ACU. MedBERT+ outperformed the original MedBERT architecture as well as XGB, achieving an AUC of 0.71 for both bipolar and schizophrenia patients when predicting ED visits and an AUC of 0.72 for bipolar patients when predicting IH visits. For schizophrenia patients, the IH predictive model had an AUC of 0.66, requiring further improvement. One potential direction for future improvement is the encoding of the demographic variables: preliminary results indicate that an appropriate encoding of patient age increased the AUC of the bipolar ED models to up to 0.78.
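
MedBERT+ is described here only at a high level, so the following is one plausible way to fuse socio-demographic embeddings with a pretrained code-sequence encoder ahead of a binary ACU head; the dimensions, demographic fields, and pooling choice are assumptions, not the thesis's architecture.

```python
import torch
import torch.nn as nn

class AcuHead(nn.Module):
    """Hypothetical MedBERT+-style classifier: encoder output + demographics."""
    def __init__(self, encoder, n_sex=2, n_age_buckets=10, demo_dim=16):
        super().__init__()
        self.encoder = encoder  # assumed: HF-style encoder pretrained on diagnosis codes
        self.sex_emb = nn.Embedding(n_sex, demo_dim)
        self.age_emb = nn.Embedding(n_age_buckets, demo_dim)
        self.classifier = nn.Linear(encoder.config.hidden_size + 2 * demo_dim, 1)

    def forward(self, code_ids, attention_mask, sex, age_bucket):
        hidden = self.encoder(input_ids=code_ids,
                              attention_mask=attention_mask).last_hidden_state
        pooled = hidden[:, 0]  # [CLS]-style pooling over the code sequence
        demo = torch.cat([self.sex_emb(sex), self.age_emb(age_bucket)], dim=-1)
        logit = self.classifier(torch.cat([pooled, demo], dim=-1))
        return logit.squeeze(-1)  # one ACU logit per patient
```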
6

Leveraging Transformer Models and Elasticsearch to Help Prevent and Manage Diabetes through EFT Cues

Shah, Aditya Ashishkumar 16 June 2023
Diabetes in humans is a long-term (chronic) illness that affects how the body converts food into energy. Approximately one in ten individuals residing in the United States is affected by diabetes, and more than 90% of those have type 2 diabetes (T2D). In type 1 diabetes the body fails to produce insulin, making insulin injections necessary for survival; with type 2 diabetes, the body cannot use insulin well. A proven way to manage diabetes is through a positive mindset and a healthy lifestyle. Several studies have been conducted at Virginia Tech and the University at Buffalo on discovering helpful characteristics in a person's day-to-day life that relate to important events. They consider Episodic Future Thinking (EFT), where participants identify several events/actions that might occur at multiple future time frames (1 month to 10 years) in text-based descriptions (cues). This research aims to detect content characteristics from these EFT cues. However, class imbalance often presents a challenging issue when dealing with such domain-specific data. To mitigate this issue, this research employs Elasticsearch to address data imbalance and enhance the machine learning (ML) pipeline for improved prediction accuracy. By leveraging Elasticsearch and transformer models, this study constructs classifiers and regression models that can be used to identify various content characteristics from the cues. To the best of our knowledge, this work represents the first attempt to employ natural language processing (NLP) techniques to analyze EFT cues and establish a correlation between those characteristics and their impacts on decision-making and health outcomes. / Master of Science / Diabetes is a serious long-term illness that impacts how the body converts food into energy. It affects around one in ten individuals residing in the United States, and over 90% of these individuals have type 2 diabetes (T2D). While a positive attitude and healthy lifestyle can help with the management of diabetes, it is unclear exactly which mental attitudes most affect health outcomes. To gain a better understanding of this relationship, researchers from Virginia Tech and the University at Buffalo conducted multiple studies on Episodic Future Thinking (EFT), where participants identify several events or actions that could take place in the future. This research uses natural language processing (NLP) to analyze the descriptions of these events (cues) and identify different characteristics that relate to a person's day-to-day life. With the help of Elasticsearch and transformer models, this work handles the data imbalance and improves model predictions for different categories within cues. Overall, this research has the potential to provide valuable insights into the factors that affect diabetes risk, potentially leading to better management and prevention strategies and treatments.
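
The abstract does not spell out the Elasticsearch step, so the following is one plausible reading, sketched under assumptions: labeled cues are indexed, and BM25 similarity retrieves minority-class look-alikes as oversampling candidates. The index and field names are hypothetical, in elasticsearch-py 8.x style.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def similar_minority_cues(seed_cue, label, k=10):
    # Retrieve the k indexed cues most lexically similar to a
    # minority-class seed cue, restricted to the same label.
    resp = es.search(
        index="eft_cues",  # hypothetical index of labeled EFT cues
        query={
            "bool": {
                "must": {"match": {"cue_text": seed_cue}},  # BM25 scoring
                "filter": {"term": {"label": label}},
            }
        },
        size=k,
    )
    return [hit["_source"]["cue_text"] for hit in resp["hits"]["hits"]]
```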
7

Generative Language Models for Automated Programming Feedback

Hedberg Segeholm, Lea, Gustafsson, Erik January 2023
In recent years, Generative Language Models have exploded into the mainstream, with household names like BERT and ChatGPT proving that text generation has the potential to solve a variety of tasks. As the number of students enrolled in programming classes has increased significantly, providing adequate feedback for everyone has become a pressing logistical issue. In this work, we evaluate the ability of near state-of-the-art Generative Language Models to provide such feedback on an automated basis. Our results show that the latest publicly available model, GPT-3.5, has a significant aptitude for finding errors in code, while the older GPT-3 is noticeably more uneven in its analysis. It is our hope that future, potentially fine-tuned, models could help fill the role of providing early feedback for beginners, significantly alleviating the pressure put upon instructors.
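
A minimal sketch of how such automated feedback might be requested from GPT-3.5 through the OpenAI chat API follows; the prompt wording, model name, and buggy example are illustrative assumptions, not the thesis's evaluation protocol.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

student_code = """
def average(grades):
    total = 0
    for g in grades:
        total += g
    return total / len(grades) - 1   # subtle bug for the model to catch
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a programming tutor. Identify any errors in the "
                    "student's code and explain them in beginner-friendly terms."},
        {"role": "user", "content": student_code},
    ],
)
print(response.choices[0].message.content)
```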
8

Distillation or loss of information? The effects of distillation on model redundancy

Sventickaite, Eva Elzbieta January 2022
The necessity for billions of parameters in large language models has lately been questioned, as there are still open questions regarding how information is captured in the networks. It could be argued that without this knowledge there may be a tendency to overparameterize the models. In turn, the investigation of model redundancy, and of the methods that minimize it, is important to both academic and commercial entities. As such, the two main goals of this project were, firstly, to discover whether one such method, namely distillation, reduces the redundancy of language models without losing linguistic capabilities and, secondly, to determine whether model architecture or multilingualism has a bigger effect on said reduction. To do so, ten models (monolingual, multilingual, and their distilled counterparts) were evaluated layer- and neuron-wise. In terms of layers, we evaluated the layer correlation of all models by visualising heatmaps and calculating the average per-layer similarity. To establish neuron-level redundancy, a classifier probe was applied to the model neurons, both for the whole model and for a reduced set obtained with a clustering algorithm, and its performance was assessed on two tasks: Part-of-Speech (POS) and Dependency (DEP) tagging. To determine the effects of distillation on the multilingualism of the models, we investigated cross-lingual transfer for the same tasks and compared the results of the classifier as applied to multilingual models and one distilled variant in ten languages, nine Indo-European and one non-Indo-European. The results show that distillation reduces the number of redundant neurons at the cost of losing some linguistic knowledge. In addition, the redundancy in the distilled models is mainly attributed to the architecture on which they are based, with the multilingualism aspect having only a mild impact. Finally, the cross-lingual transfer experiments showed that after distillation the model loses the ability to capture some languages more than others. In turn, the outcome of the project suggests that distillation could be applied to reduce the size of billion-parameter models and is a promising method for reducing redundancy in current language models.
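
A minimal sketch of the layer-level measurement, assuming mean-pooled hidden states compared pairwise with cosine similarity; the probe sentence and the choice of DistilBERT are illustrative assumptions, not the thesis's exact setup.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True)

inputs = tokenizer("Distillation may leave some layers redundant.",
                   return_tensors="pt")
with torch.no_grad():
    # Tuple of hidden states: the embedding layer plus one entry per block.
    layers = model(**inputs).hidden_states

# Mean-pool over tokens, then compare every pair of layers.
pooled = torch.stack([h.mean(dim=1).squeeze(0) for h in layers])
sims = torch.nn.functional.cosine_similarity(
    pooled.unsqueeze(1), pooled.unsqueeze(0), dim=-1)
print(sims)  # high off-diagonal similarity hints at redundant layers
```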
9

Bean Soup Translation: Flexible, Linguistically-motivated Syntax for Machine Translation

Mehay, Dennis Nolan 30 August 2012
No description available.
