  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Colonial lineage and cultural fusion: family identity and progressive design in the Kingscote dining room

Emery, Caitlin M. January 2009 (has links)
Thesis (M.A.)--University of Delaware, 2009. / Principal faculty advisor: Brock W. Jobe, Winterthur Program in Early American Culture. Includes bibliographical references.
52

Big Data analytics for the forest industry : A proof-of-concept built on cloud technologies

Sellén, David January 2016 (has links)
Large amounts of data in various forms are generated at a fast pace in today's society. This is commonly referred to as "Big Data". Making use of Big Data has become increasingly important in both business and research. The forest industry generates large amounts of data during the different processes of forest harvesting. In Sweden, forest information is sent to SDC, the information hub for the Swedish forest industry. In 2014, SDC received reports on 75.5 million m3fub from harvester and forwarder machines. These machines use a global standard called StanForD 2010 for communication and to create reports about harvested stems. The arrival of scalable cloud technologies that combine Big Data with machine learning makes it interesting to develop an application to analyze the large amounts of data produced by the forest industry. In this study, a proof-of-concept has been implemented to analyze harvest production reports from the StanForD 2010 standard. The system consists of a back-end and a front-end application and is built using cloud technologies such as Apache Spark and Hadoop. System tests have shown that the concept can successfully handle storage, processing and machine learning on gigabytes of HPR files. It is capable of extracting information from raw HPR data into datasets and supports a machine learning pipeline with pre-processing and K-Means clustering. The proof-of-concept has provided a code base for further development of a system that could be used to find valuable knowledge for the forest industry.
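The clustering step of the pipeline described in this abstract can be illustrated with a minimal sketch of K-Means (Lloyd's algorithm) in plain Python. The feature vectors here — hypothetical (diameter, length) pairs for harvested stems — are invented for illustration and are not taken from the study; the actual system would run Apache Spark's distributed K-Means over datasets extracted from HPR files.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: cluster 2-D feature vectors into k groups."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

# Hypothetical (diameter_mm, length_cm) stem features, two obvious size groups.
stems = [(180, 420), (190, 430), (185, 410), (320, 510), (330, 520), (325, 515)]
centers, clusters = kmeans(stems, k=2)
```

On these toy points the algorithm converges to the two natural size groups regardless of which points are drawn as initial centers.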
53

The appropriateness of selected subtests of the Stanford-Binet Intelligence Scale, Fourth Edition for hearing impaired children

Perley-McField, Jo-Anne January 1990 (has links)
This study proposed to evaluate the appropriateness of selected subtests of the Stanford-Binet Intelligence Scale: Fourth Edition (SB:FE) for use with severely to profoundly hearing impaired children. The subjects used in this study were enrolled in a residential/day school for the deaf whose educational methodology was Total Communication. The subjects were tested on both the SB:FE nonverbal selected subtests and the Performance Scale of the Wechsler Intelligence Scale for Children-Revised (WISC-R PIQ). To assess appropriateness, several procedures were employed comparing data gathered from the hearing impaired sample with data reported for the standardized population of the SB:FE. Correlations were computed between the WISC-R and the SB:FE, and comparisons of the total composite scores for each measure were made to detect any systematic differences. The results indicated that the correlations reported for the hearing impaired sample are generally similar to the correlations reported for the standardized sample of the SB:FE. The analysis performed between the Area Scores of the SB:FE and the WISC-R PIQ to detect systematic differences revealed a difference of one standard deviation between these two instruments, with the SB:FE results being lower than the WISC-R PIQ results. It was concluded that the selected subtests of the SB:FE and the WISC-R PIQ could not be used interchangeably. Further research into this area was advised before using this measure to estimate general cognitive ability for hearing impaired children whose levels of language development may be delayed. Further research was also encouraged to confirm the suggestion of greater predictive validity of the SB:FE with academic measures. It was suggested that these findings indicated that the use of language as a cognitive tool may be important in acquiring certain problem solving skills.
/ Faculty of Education / Department of Educational and Counselling Psychology, and Special Education (ECPS) / Graduate
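The core comparison in this study rests on the Pearson product-moment correlation between paired test scores. A minimal sketch follows; the score pairs are invented for illustration (roughly one standard deviation apart, echoing the pattern the study reports) and are not data from the study.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between paired scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented paired composite scores (WISC-R PIQ, SB:FE), not the study's data.
wisc = [102, 95, 110, 88, 120, 99]
sbfe = [ 87, 82,  94, 75, 103, 85]   # consistently lower, as the study observed
r = pearson_r(wisc, sbfe)
```

A high correlation with a constant score gap is exactly the pattern the abstract describes: the instruments rank children similarly but cannot be used interchangeably, because one yields systematically lower composites.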
54

ClarQue: Chatbot Recognizing Ambiguity in the Conversation and Asking Clarifying Questions

Mody, Shreeya Himanshu 31 July 2020 (has links)
Recognizing when we need more information and asking clarifying questions are integral to communication in our day-to-day life. They help us complete our mental model of the world and eliminate confusion. Chatbots need this technique to meaningfully collaborate with humans. We have investigated a process to generate an automated system that mimics human communication behavior using knowledge graphs, weights, an ambiguity test, and a response generator. Given an input dialog text, and based on the chatbot's knowledge about the world and the user, the system can decide whether it has enough information or requires more. Based on that decision, the chatbot generates a dialog output text: an answer if a question was asked, a statement if there are no doubts, or a clarifying question if there is any ambiguity. The effectiveness of these features is backed up by an empirical study which suggests that they are very useful in a chatbot, not only for crucial information retrieval but also for keeping the flow and context of the conversation intact.
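The decision the abstract describes — answer when the reference is unambiguous, otherwise ask a clarifying question — can be sketched over a toy knowledge graph. Everything here (entity names, attributes, the containment-based resolver) is invented for illustration and is far simpler than the weighted knowledge graph the thesis describes.

```python
# Toy knowledge graph: entity -> attributes. All names are invented.
KNOWLEDGE = {
    "Alex Smith (colleague)": {"type": "person", "city": "Boston"},
    "Alex Jones (neighbor)": {"type": "person", "city": "Denver"},
    "Pat Lee": {"type": "person", "city": "Austin"},
}

def resolve(mention):
    """Return every known entity whose name contains the mention."""
    return [name for name in KNOWLEDGE if mention.lower() in name.lower()]

def respond(mention):
    """Answer directly if the mention is unambiguous; otherwise ask to clarify."""
    candidates = resolve(mention)
    if len(candidates) == 1:
        entity = candidates[0]
        return f"{entity} lives in {KNOWLEDGE[entity]['city']}."
    if len(candidates) > 1:
        # Ambiguity detected: generate a clarifying question instead of guessing.
        return "Which one do you mean: " + " or ".join(candidates) + "?"
    return "I don't know that person yet."

print(respond("Alex"))  # ambiguous mention -> clarifying question
print(respond("Pat"))   # unambiguous mention -> direct answer
```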
55

Translation Memory System Optimization : How to effectively implement translation memory system optimization / Optimering av översättningsminnessystem : Hur man effektivt implementerar en optimering i översättningsminnessystem

Chau, Ting-Hey January 2015 (has links)
Translation of technical manuals is expensive, especially when a larger company needs to publish manuals for its whole product range in over 20 different languages. When a text segment (i.e. a phrase, sentence or paragraph) is manually translated, we would like to reuse these translated segments in future translation tasks. A translated segment is stored with its corresponding source language, often called a language pair, in a Translation Memory System. A language pair in a Translation Memory represents a Translation Entry, also known as a Translation Unit. During a translation, when a text segment in a source document matches a segment in the Translation Memory, the available target languages in the Translation Unit will not require a human translation; the previously translated segment can be inserted into the target document. Such functionality is provided in the single-source publishing software Skribenta, developed by Excosoft. Skribenta requires text segments in source documents to find an exact or a full match in the Translation Memory in order to apply a translation to a target language. A full match can only be achieved if a source segment is stored in a standardized form, which requires manual tagging of entities and often-recurring words such as model names and product numbers. This thesis investigates different ways to improve and optimize a Translation Memory System. One way was to aid users with the work of manually tagging entities, by developing heuristic algorithms to approach the problem of Named Entity Recognition (NER). The evaluation results from the developed heuristic algorithms were compared with the results from an off-the-shelf NER tool developed by Stanford. The results show that the developed heuristic algorithms achieve a higher F-Measure compared to the Stanford NER, and may be a great initial step to aid Excosoft's users in improving their Translation Memories.
/ [Swedish abstract, translated:] Translation of technical manuals is very costly, especially when larger organizations need to publish product manuals for their entire range in over 20 languages. When a text (e.g. a phrase, sentence or paragraph) has been translated, we want to be able to reuse the translated text in future translation projects and documents. The translated texts are stored in a Translation Memory. Each text is stored in its source language together with its translation into another language, the so-called target language; together these form a language pair in a Translation Memory System. A language pair stored in a Translation Memory constitutes a Translation Entry, also known as a Translation Unit. If a match is found when searching the Translation Memory for a given text string in the source language, translations into all available target languages for that string are returned, and these can in turn be inserted into the target document. Such functionality is offered in the publishing software Skribenta, developed by Excosoft. To perform a translation into a target language, Skribenta requires that text in the source language find an exact match or a so-called full match in the Translation Memory. A full match can only be achieved if a text is stored in standardized form, which requires manual tagging of entities and frequently occurring words such as model names and product numbers. In this thesis I investigate how to effectively implement an optimization in a Translation Memory System, in part by facilitating the manual tagging of entities. This was done with various heuristics that approach the problem of Named Entity Recognition (NER). Results from the developed heuristics were compared with results from the NER tool developed by Stanford. The results show that the heuristics I developed achieve a higher F-Measure than Stanford NER and can therefore be a good first step toward helping Excosoft's users improve their Translation Memories.
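The F-Measure used above to compare the heuristic taggers against Stanford NER is the harmonic mean of precision and recall over the tagged entities. A minimal sketch, with invented entity names standing in for the model numbers a manual segment might contain:

```python
def f_measure(predicted, gold):
    """F1 score over sets of tagged entities (harmonic mean of P and R)."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                       # correctly tagged entities
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Invented example: entities a heuristic tagger found in a manual segment
# versus the hand-tagged gold standard.
gold = {"XC-500", "PowerDrill 9", "Model T2"}
heuristic = {"XC-500", "PowerDrill 9", "warranty"}   # one miss, one false hit
print(round(f_measure(heuristic, gold), 2))  # precision 2/3, recall 2/3 -> 0.67
```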
56

The Effects of Serial Testing Upon the Results of the Stanford-Binet Tests of Intelligence

McCullough, Betsey Rogers 01 May 1948 (has links)
The revised Stanford-Binet Intelligence Scale probably is the best instrument we now have for measuring the general intelligence of young people. Using this scale we can foretell to a large degree a child's future mental growth; and with this knowledge as an important part of the total knowledge needed for prediction, we can more scientifically plan his further education and his vocational choice.
57

The perceptions and experiences of principals in increasing student achievement on the Stanford 9 Achievement Test at the middle school level

Long, Shelly Danette 01 January 2002 (has links) (PDF)
The purposes of the study were to: (a) examine the perceptions and experiences of principals in increasing student achievement on the SAT 9 at the middle school level; (b) identify how middle school principals, whose schools had met student achievement growth targets and were eligible for the Governor's Performance Awards Program, had influenced the growth process at their school sites; and (c) determine whether or not principals were using any specific document, person, program, or other source to guide their efforts. This was a series of multiple case studies that utilized interviewing as its primary method of collecting data. The researcher interviewed middle school principals whose schools had met their 2000 growth targets on the SAT 9 and were eligible for the Governor's Performance Awards Program. A discussion of the results emphasized the findings of the study. This study found that in the environment of standards-based reform, educators must identify the strategies and techniques that will increase student achievement. To accomplish this, principals must lead their staff to discover and implement research-based best practices, and accompany these new skills with ongoing professional development. Likewise, they must institute a data-driven evaluation process of all that occurs at their school sites. This evaluation process should be based on the academic growth of all students. Through the use of multiple measures, such as benchmark assessments and SAT 9 scores, data on student growth, in all areas of school life, should be collected, monitored, and evaluated over time. The results of this data should then be analyzed to identify areas of strength and weakness and to guide curriculum, instruction, intervention, and other changes at the school site. The relationship between this study and prior research was also included in the discussion section.
This study provided school administrators with recommendations on how to increase student achievement on the SAT 9 at the middle school level. Suggestions for additional research were also offered.
58

The role of working memory, as conceived by Baddeley, in the four subtests of the Stanford-Binet scale

Dumont, Jilda 12 November 2021 (has links)
[French abstract, translated:] This study belongs to the research tradition of differential cognitive psychology, which focuses on the individual differences observed in classical intelligence tests using the information-processing approach. In general terms, this thesis is concerned with the applied possibilities of Baddeley's (1986) model of working memory. The more specific objective is to study the role of two components of this model, the articulatory loop and the visuo-spatial sketchpad, in the subtests of the short-term memory scale of the Stanford-Binet, fourth edition. The methodology used is that of memory loading: performance in delayed serial recall of two types of tasks (spatial and verbal) is compared, using the items of the memory scale as interfering tasks. It is postulated that the articulatory loop is involved in all the subtests, while one of these subtests possibly draws on the resources of the visuo-spatial sketchpad. A quasi-experimental within-subject design was used with sixteen university-student participants. The observed results show no significant interaction between memory load and the interfering tasks. However, across all conditions, the percentage of recall errors for the verbal task is higher than that observed for the visuo-spatial task. Moreover, the subtests show an undifferentiated effect on the two types of tasks.
59

Comparison of results obtained from the Wechsler-Bellevue vocabulary test with those from the Stanford-Binet vocabulary, using a population of normal subjects and mental patients.

Tagiuri, Renato. January 1946 (has links)
No description available.
60

A Feasibility Study of the Likelihood of Use of the Spanish Version of Stanford's Chronic Disease Self-Management Program (CDSMP) by the Ohio Hispanic Population

Chahal, Jasleen Kaur 09 August 2010 (has links)
No description available.
