251 |
Högläsning som pedagogiskt redskap : Fem lärares tankar och erfarenheter [Reading aloud as a pedagogical tool: Five teachers' thoughts and experiences]
Anderson, Caroline; Andersson, Erika, January 2012 (has links)
The aim of our study was to examine some teachers' perceptions of, and experiences with, reading aloud to pupils as a pedagogical tool. Our research questions were: In what contexts is reading aloud used? Which texts are used for reading aloud? Are shared text experiences from reading aloud followed up in order to develop reading comprehension, and if so, how? We used a qualitative method, carrying out both observations and interviews. A qualitative method was chosen because we wanted to study and interview each teacher in his or her own setting, and we combined observation and interview to gain a deeper understanding of how the teachers work with reading aloud. The study is grounded in a sociocultural theory, which holds that learning takes place in interaction with others and that each individual's learning depends on the context he or she is part of. The results show that all participating teachers regard reading aloud as a moment of relaxation, giving pupils a chance to unwind between other school activities. All teachers also held that the read-aloud session has pedagogical value, in that pupils' reading comprehension and interest in reading can develop. At the same time, the results show that the texts the teacher reads aloud are seldom followed up and that the time set aside for reading aloud is scarce.
|
252 |
The illusion of creation of the text
Rahman, Md. Shaifur, January 2005 (has links)
Is it really possible to create a (literary) text? It is actually impossible: to create an authentic text we need an authentic context, and such a context is itself impossible to create. So all the literary texts we have are not authentic and were not created authentically.
|
253 |
Supply chain design: a conceptual model and tactical simulations
Brann, Jeremy Matthew, 15 May 2009 (has links)
In the current research literature, supply chain management (SCM) is a hot topic crossing the boundaries of many academic disciplines, and SCM-related work can be found in the relevant literature of each of them. Supply chain management can be defined as effectively and efficiently managing the flows (information, financial, and physical) in all stages of the supply chain to add value for end customers and generate profit for all firms in the chain. Supply chains involve multiple partners with the common goal of satisfying customer demand at a profit.

While supply chains are not new, the way academics and practitioners view the need for, and the means to manage, these chains is relatively new. Very little literature can be found on designing supply chains from the ground up, or on which dimensions of supply chain management should be considered when designing a supply chain. Additionally, we have found that very few tools exist to help during the design phase of a supply chain, and fewer still allow supply chain designs to be compared.

We contribute to the current literature by determining which supply chain management dimensions should be considered during the design process. We employ text mining to create a supply chain design conceptual model and compare this model to existing supply chain models and reference frameworks. We further contribute to the SCM literature by creatively applying concepts and results from the field of Stochastic Processes to build a custom simulator capable of comparing different supply chain designs and providing insights into how the different designs affect the supply chain's total inventory cost. The simulator provides a mechanism for testing when real-time demand information is more beneficial than first-come, first-served (FCFS) order processing when the distributional form of lead-time demand is derived from the supply chain's operating characteristics rather than assumed to be known. We find that in many instances FCFS outperforms the use of real-time information in providing the lowest total inventory cost.
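For readers unfamiliar with this kind of tactical simulation, the sketch below is a deliberately simplified stand-in, not the author's simulator: it contrasts a fixed order-up-to policy, with unmet demand backordered in FCFS fashion, against a policy that resets its order-up-to level each period from recent (real-time) demand, while replenishment lead times are generated from simple assumed operating characteristics rather than a known lead-time-demand distribution. Every cost, demand range, and stock level here is an invented illustration.

```python
import random

# Toy single-stage inventory simulation (illustrative assumptions only).
HOLD, BACKORDER, HORIZON = 1.0, 5.0, 20_000

def lead_time(rng):
    # Lead time built from "operating characteristics": processing + transit delays.
    return rng.randint(1, 2) + rng.randint(1, 3)

def simulate(order_up_to, seed=7):
    rng = random.Random(seed)
    inventory, pipeline, cost = 60, [], 0.0   # negative inventory = FCFS backorders
    recent = [10.0] * 8                       # rolling demand history
    for _ in range(HORIZON):
        # Receive shipments whose lead time has elapsed.
        pipeline = [(t - 1, q) for t, q in pipeline]
        inventory += sum(q for t, q in pipeline if t <= 0)
        pipeline = [(t, q) for t, q in pipeline if t > 0]

        demand = rng.randint(4, 16)           # uniform period demand (assumed)
        recent = recent[1:] + [float(demand)]
        inventory -= demand

        # Replenish up to the policy's target level.
        target = order_up_to(recent)
        position = inventory + sum(q for _, q in pipeline)
        if position < target:
            pipeline.append((lead_time(rng), target - position))

        cost += HOLD * max(inventory, 0) + BACKORDER * max(-inventory, 0)
    return cost

static_cost  = simulate(lambda hist: 60)                    # fixed order-up-to level
dynamic_cost = simulate(lambda hist: int(sum(hist[-4:])))   # level set from real-time demand
print(f"fixed order-up-to cost: {static_cost:,.0f}")
print(f"demand-driven cost:     {dynamic_cost:,.0f}")
```

Running both policies over the same horizon gives a rough feel for when real-time demand information does, or does not, lower total inventory cost.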
|
254 |
Incident Data Analysis Using Data Mining Techniques
Veltman, Lisa M., 16 January 2010 (has links)
Several databases collect information on various types of incidents, and most analyses performed on them rarely go beyond basic trend analysis or counting occurrences. This research uses the more robust methods of data mining and text mining to analyze the Hazardous Substances Emergency Events Surveillance (HSEES) system data by identifying relationships among variables, predicting the occurrence of injuries, and assessing the value added by the text data. The benefits of thoroughly analyzing past incidents include a better understanding of safety performance, of how to focus efforts to reduce incidents, and of how people are affected by these incidents.

The results of this research showed that visually exploring the data via bar graphs did not yield any noticeable patterns. Clustering the data identified groupings of categories across the variable inputs, such as manufacturing events resulting from intentional acts like system startup and shutdown, performing maintenance, and improper dumping. Text mining the data allowed for clustering the events and further describing the data; however, these clusters were not noticeably distinct, and the conclusions that could be drawn from them were limited. Including the text comments in the overall analysis of HSEES data greatly improved the predictive power of the models. Interpretation of the textual data's contribution was limited; however, the qualitative conclusions drawn were similar to those of the model without textual input. Although HSEES data is collected to describe the effects that hazardous substance releases and threatened releases have on people, a fairly good predictive model was still obtained from the few variables identified as cause-related.
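To make the "value added by the text data" question concrete, the following sketch (not the HSEES analysis itself) fits an injury-prediction model from structured incident fields alone and again with TF-IDF features mined from free-text comments, then compares cross-validated performance. The field names, comments, and labels are invented placeholders for the HSEES variables.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Invented stand-in data for the structured fields and free-text comments.
df = pd.DataFrame({
    "industry": ["manufacturing", "transport", "manufacturing", "storage"] * 25,
    "cause":    ["equipment failure", "human error", "startup/shutdown", "dumping"] * 25,
    "comment":  ["valve left open during maintenance", "forklift struck drum",
                 "release during system startup", "improper disposal of solvent"] * 25,
    "injury":   [1, 1, 0, 0] * 25,
})

structured_only = ColumnTransformer([("cat", OneHotEncoder(), ["industry", "cause"])])
with_text = ColumnTransformer([
    ("cat",  OneHotEncoder(), ["industry", "cause"]),
    ("text", TfidfVectorizer(min_df=1), "comment"),    # text-mined features from comments
])

for name, features in [("structured only", structured_only), ("structured + text", with_text)]:
    model = Pipeline([("features", features), ("clf", LogisticRegression(max_iter=1000))])
    score = cross_val_score(model, df, df["injury"], cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {score:.2f}")
```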
|
255 |
Feature Translation-based Multilingual Document Clustering Technique
Liao, Shan-Yu, 08 August 2006 (links)
Document clustering automatically organizes a document collection into distinct groups of similar documents on the basis of their contents. Most existing document clustering techniques deal with monolingual documents (i.e., documents written in one language). However, with the trend of globalization and advances in Internet technology, an organization or individual often generates or acquires, and subsequently archives, documents in different languages, creating the need for multilingual document clustering (MLDC). Motivated by this significance and need, this study designs a translation-based MLDC technique. Our empirical evaluation results show that the proposed multilingual document clustering technique achieves satisfactory clustering effectiveness as measured by both cluster recall and cluster precision.
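A minimal sketch of the feature-translation idea, assuming a tiny bilingual lexicon and scikit-learn (this is not the thesis system): terms in one language are mapped into a shared English feature space so that documents in both languages can be clustered together.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative bilingual lexicon; a real system would use a full dictionary.
lexicon = {"資料": "data", "探勘": "mining", "市場": "market", "股票": "stock"}

def translate(doc):
    # Map any known foreign term to its English feature; leave other tokens as-is.
    return " ".join(lexicon.get(tok, tok) for tok in doc.split())

docs = [
    "data mining finds patterns in data",
    "資料 探勘 finds patterns",            # Chinese terms mapped into English features
    "stock market prices rise",
    "股票 市場 prices fall",
]
vectors = TfidfVectorizer().fit_transform([translate(d) for d in docs])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)   # documents about the same topic should share a cluster label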
|
256 |
Summary-based document categorization with LSI
Liu, Hsiao-Wen, 14 February 2007 (links)
Text categorization, which automatically assigns documents to the appropriate pre-defined category or categories, is essential to retrieving desired documents efficiently and effectively from a huge text repository, e.g., the World Wide Web. Most techniques, however, suffer from the feature selection problem and the vocabulary mismatch problem. A few research works have addressed text categorization via text summarization to reduce the size of documents, and consequently the number of features to consider, while others have proposed using latent semantic indexing (LSI) to reveal the true meaning of a term via its association with other terms. Few works in the literature, however, have studied the joint effect of text summarization and semantic dimension reduction. The objective of this research is thus to propose a practical approach, SBDR, to deal with the above difficulties in text categorization tasks.
Two experiments are conducted to validate our proposed approach. In the first experiment, the results show that text summarization does improve categorization performance; in addition, when constructing important sentences, the association terms of both noun-noun and noun-verb pairs should be considered. Results of the second experiment indicate slightly better performance when adopting LSI exclusively (i.e., no summarization) than with SBDR (i.e., with summarization). Nonetheless, the minor accuracy reduction is largely compensated for by the computational time saved when LSI is applied to summarized text. The feasibility of the SBDR approach is thus justified.
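The SBDR pipeline described above can be read roughly as summarize, reduce dimensions with LSI, then categorize. The example below is only an assumption-laden illustration, not the author's implementation: sentence selection is naively length-based rather than driven by noun-noun and noun-verb association terms, and the documents are toy data.

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def naive_summary(doc, keep=2):
    # Keep the longest sentences as a crude stand-in for real sentence scoring.
    sents = [s.strip() for s in doc.split(".") if s.strip()]
    return ". ".join(sorted(sents, key=len, reverse=True)[:keep])

train_docs = [
    "The match ended two to one. The striker scored late. Fans celebrated downtown.",
    "The team lost again. The coach blamed injuries. The league table looks grim.",
    "The central bank raised rates. Markets fell sharply. Analysts expect inflation to cool.",
    "Quarterly earnings beat forecasts. The stock rallied. Investors remain cautious.",
]
labels = ["sports", "sports", "finance", "finance"]

model = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("lsi",   TruncatedSVD(n_components=2, random_state=0)),  # latent semantic indexing step
    ("clf",   LogisticRegression()),
])
model.fit([naive_summary(d) for d in train_docs], labels)
print(model.predict([naive_summary("Shares dropped after the earnings call. Rates may rise.")]))
```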
|
257 |
Mining-Based Category Evolution for Text Databases
Dong, Yuan-Xin, 18 July 2000 (links)
As text repositories grow in number and size and global connectivity improves, the amount of online information in the form of free-format text is growing extremely rapidly. In many large organizations, huge volumes of textual information are created and maintained, and there is a pressing need to support efficient and effective information retrieval, filtering, and management. Text categorization is essential to the efficient management and retrieval of documents. Past research on text categorization mainly focused on developing or adopting statistical classification or inductive learning methods for automatically discovering text categorization patterns from a training set of manually categorized documents. However, as documents accumulate, the pre-defined categories may no longer capture the characteristics of the documents. In this study, we proposed a mining-based category evolution (MiCE) technique to adjust the categories based on the existing categories and their associated documents. According to the empirical evaluation results, the proposed technique, MiCE, was more effective than the discovery-based category management approach, insensitive to the quality of the original categories, and capable of improving classification accuracy.
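As a loose illustration of category evolution (not the MiCE algorithm), one simple way to let a category adapt to accumulating documents is to test whether its members have drifted into distinct sub-groups and split it when they have; the cohesion threshold and documents below are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import silhouette_score

# An existing category whose documents may have drifted into two topics.
category_docs = {
    "computing": [
        "neural networks for image recognition",
        "deep learning models and training tricks",
        "relational database indexing and query plans",
        "sql query optimization in large databases",
    ],
}

for name, docs in category_docs.items():
    X = TfidfVectorizer().fit_transform(docs)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > 0.1:   # assumed threshold: the sub-groups are distinct enough to split
        print(f"split '{name}' into:",
              [np.array(docs)[labels == k].tolist() for k in (0, 1)])
    else:
        print(f"keep '{name}' as one category")
```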
|
258 |
Integrating Knowledge Maps From Distributed Document Repositories
Yan, Ming-De, 14 July 2003 (links)
In this thesis, we propose a knowledge map integration system that merges distributed knowledge maps into a global knowledge map based on the concept mapping methodology. The system performs two functions: knowledge map integration and knowledge map maintenance. The integration function combines the local knowledge maps specified by distributed organizations into a global knowledge map, so that knowledge seekers can access the overall structure of the domain knowledge. In addition, the local knowledge maps of different organizations change dynamically as information accumulates, which creates a demand for maintenance to keep the global knowledge map up to date. The maintenance function propagates changes in every local knowledge map and updates the global structure accordingly. The knowledge map integration system is evaluated using the master's thesis repository at the National Central Library, with good results.
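A minimal sketch of the integration step, assuming each local knowledge map is a concept graph and using networkx (this is not the thesis system): the global map is formed as the union of the local maps' concepts and relations; a real integration would also reconcile synonymous concepts and propagate later updates.

```python
import networkx as nx

# Two invented local knowledge maps from different organizations.
local_a = nx.DiGraph([("information retrieval", "text mining"),
                      ("text mining", "document clustering")])
local_b = nx.DiGraph([("text mining", "text categorization"),
                      ("machine learning", "text categorization")])

# Merge into one global map by taking the union of nodes and edges.
global_map = nx.compose(local_a, local_b)
print(sorted(global_map.nodes()))
print(sorted(global_map.edges()))
```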
|
259 |
Use of Text Summarization for Supporting Event Detection
Wu, Pao-Feng, 12 August 2003 (links)
Environmental scanning, which acquires and uses information about events, trends, and changes in an organization's external environment, is an important process in the strategic management of an organization and permits it to adapt quickly to changes in its external environment. Event detection, which detects the onset of new events from news documents, is essential to facilitating an organization's environmental scanning activity. However, traditional feature-based event detection techniques detect events by comparing the similarity between features of news stories and incur several problems. For example, for illustration and comparison purposes, a news story may contain sentences or paragraphs that are not highly relevant to defining its event; without removing such less relevant sentences or paragraphs before detection, the effectiveness of traditional event detection techniques may suffer. In this study, we developed a summary-based event detection (SED) technique that filters out less relevant sentences or paragraphs in a news story before performing feature-based event detection. Using a traditional feature-based event detection technique (i.e., INCR) as the benchmark, the empirical evaluation results showed that the proposed SED technique could achieve comparable or even better detection effectiveness (measured by miss and false alarm rates) than the INCR technique for data corpora in which the percentage of news stories discussing old events is high.
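The summarize-then-detect idea can be sketched as follows; this is an illustration under simplifying assumptions, not the SED or INCR implementations. Each story keeps only the sentences most similar to the story as a whole, and an incoming story is flagged as a new event when its summary is not sufficiently close to any existing event's summary. The similarity threshold and the stories are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(story, keep=2):
    # Keep the sentences most similar to the story as a whole (crude relevance filter).
    sents = [s.strip() for s in story.split(".") if s.strip()]
    vec = TfidfVectorizer().fit(sents)
    sims = cosine_similarity(vec.transform(sents), vec.transform([story])).ravel()
    return ". ".join(sents[i] for i in np.argsort(sims)[::-1][:keep])

events = ["An earthquake struck the coast. Thousands evacuated. Aftershocks continue."]
incoming = "The central bank cut interest rates. Markets rallied. Officials hinted at more cuts."

# Compare the incoming story's summary against the summaries of known events.
space = TfidfVectorizer().fit([summarize(s) for s in events] + [summarize(incoming)])
sims = cosine_similarity(space.transform([summarize(incoming)]),
                         space.transform([summarize(s) for s in events]))
print("new event detected" if sims.max() < 0.3 else "belongs to an existing event")
```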
|
260 |
Construction Gene Relation Network Using Text Mining and Bayesian Network
Chen, Shu-fen, 11 September 2007 (links)
In the organism, genes don't work independently; their interactions determine how functional tasks are carried out. Observing these interactions helps us understand the relations between genes and how diseases are caused, and several methods have been adopted to observe such interactions and construct gene relation networks. Existing algorithms for constructing gene relation networks can be classified into two types: one uses the literature to extract relations between genes; the other constructs the network but does not describe the relations between genes. In this thesis, we propose a hybrid method that combines the two. A Bayesian network is applied to microarray gene expression data to construct the gene network, while text mining is used to extract gene relations from a document database; the proposed algorithm then integrates the gene network and the extracted relations into a gene relation network. Experimental results show that related genes are connected in the network and that the relations are marked on the links between them.
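As a rough, assumption-heavy illustration of the hybrid idea (not the thesis method), the sketch below scores gene-gene dependencies from expression data, with plain correlation standing in for Bayesian network structure learning, and labels each retained link with a relation sentence that mentions both genes in a small literature corpus. The genes, expression values, and abstract snippets are all invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Invented microarray-style expression data: geneB depends on geneA, geneC does not.
expr = pd.DataFrame({"geneA": rng.normal(size=50)})
expr["geneB"] = expr["geneA"] * 0.8 + rng.normal(scale=0.3, size=50)
expr["geneC"] = rng.normal(size=50)

# Invented literature snippets used to label the links.
abstracts = ["geneA activates geneB in the stress response pathway",
             "geneC expression is unrelated to geneA"]

corr = expr.corr().abs()
for g1, g2 in [("geneA", "geneB"), ("geneA", "geneC")]:
    if corr.loc[g1, g2] > 0.5:   # assumed threshold for keeping an edge
        relation = next((a for a in abstracts if g1 in a and g2 in a), "unlabeled")
        print(f"{g1} -- {g2}: {relation}")
```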
|