  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Att läsa skönlitteratur : Gymnasieelevers arbete med att utveckla litterär kompetens

Magnusson, Petra January 2008 (has links)
This study is based on my work as a teacher of Swedish language and literature at upper secondary school, and my interest in developing the teaching of literature by combining New Criticism theories with theories of literary reception focused on reader response. The purpose of the study is to examine the learning process involved in developing literary competence and to find out how the competence shown by the students can be described. My starting point is a discussion of literary competence and Örjan Torell's theory and model of literary competence. In examining literary competence through how students use their constitutional competence as well as their performance and literary-transfer competences, Torell's model shows both the possibility of and the need for knowledge of literature and personal reception, by focusing on the dialogue between author and reader.

The method is qualitative, and the empirical material was collected in one upper secondary school class in the Social Science programme during their second and third years. The material comprises written reading logs, written answers to tasks, and transcribed recordings of discussions of literature.

The analysis shows variety in the literary competence displayed and developed by the students and points to the complexity of the concept of literary competence. The model proves useful in focusing the teaching and learning of literature, and the results indicate that combining the study of epochs, genres, ideas and conceptions with personal reading is a valuable approach to teaching literature.

Keywords: literary competence, literary teaching, literary reception, discussions of literature, reading logs, literary repertoire, text competence.
32

Understanding a large-scale IPTV network via system logs

Qiu, Tongqing 08 July 2011 (has links)
Recently, there has been a global trend in the telecommunications industry toward the rapid deployment of IPTV (Internet Protocol Television) infrastructure and services. While the industry rushes into the IPTV era, comprehensive understanding of the status and dynamics of IPTV networks lags behind. Filling this gap requires in-depth analysis of large amounts of measurement data across the IPTV network. One type of data of particular interest is the device or system log, which has not been systematically studied before. In this dissertation, we explore the possibility of utilizing system logs to serve a wide range of IPTV network management purposes, including health monitoring, troubleshooting and performance evaluation. In particular, we develop a tool to convert raw router syslogs into meaningful network events. In addition, by analyzing set-top box (STB) logs, we propose a series of models to capture both channel popularity and dynamics, as well as users' activity on the IPTV network.
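The abstract's core step, turning raw router syslogs into meaningful network events, can be sketched as follows. This is a minimal illustration under assumed inputs: the log format (`timestamp router message`) and the grouping rule (messages from the same router within a time window form one event) are hypothetical, not the dissertation's actual tool.

```python
from datetime import datetime

def parse_syslog(line):
    # Assumed format: "2011-07-08T12:00:01 router1 LINK_DOWN port=3"
    ts, router, msg = line.split(" ", 2)
    return datetime.fromisoformat(ts), router, msg

def group_events(lines, window_seconds=60):
    """Group consecutive messages from the same router into one event
    when they fall within `window_seconds` of each other."""
    events = []
    for ts, router, msg in sorted(map(parse_syslog, lines)):
        if (events and events[-1]["router"] == router
                and (ts - events[-1]["end"]).total_seconds() <= window_seconds):
            events[-1]["messages"].append(msg)
            events[-1]["end"] = ts
        else:
            events.append({"router": router, "start": ts, "end": ts,
                           "messages": [msg]})
    return events
```

A real tool would also classify events by message type and correlate across routers; this sketch only shows the raw-line-to-event step.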
33

Habituation towards environmental enrichment in captive bears and its effect on stereotypic behaviours.

Anderson, Claes January 2008 (has links)
<p>The benefits gained from presenting environmental enrichment (EE) to captive animals are widely recognized. Few studies have, however, examined how to maximize the effect of EE. Repeated presentation of EE may cause a reduced interest in the EE device, called habituation. To study habituation towards EE, behavioural data from 14 captive sloth bears (<em>Melursus ursinus</em>) were collected during two different EE treatments. In treatment one, honey logs were presented for five consecutive days (ConsEE). In treatment two, the logs were presented every alternate day for five days (AltEE). Both treatments showed a significant effect on responsiveness toward the EE; however, leaving gap days in between presentations in AltEE showed no reliable reduction in habituation. Both treatments significantly reduced stereotypies, but only ConsEE reduced levels of stereotypies in the long term. Explorative behaviours, which are the most prominent behaviours in the wild, increased during both treatments. This is consistent with previous findings (Fischbacher & Schmid 1999, Grandia et al. 2001) that EE increases natural behaviours, which has been described as an indication of improved welfare (e.g. Carlstead et al. 1991). Other behavioural categories, such as social and passive behaviours, were unaffected by the EE presentations. The results show that it is possible to increase the effectiveness of EE by simple means in order to ensure animal welfare.</p>
35

Time performance comparison in determining the weak parts in wooden logs

Akhtar, Naeem, Alzghoul, Ahmad January 2009 (has links)
<p>The steadily increasing demand for wood forces sawmills to increase the efficiency and effectiveness of their equipment. Weak parts and twist in wooden logs have been documented as the most common and crucial defects in sawn lumber.</p><p>In this thesis we implement a program that is able to determine the weak parts in wooden logs. The implementation is in two languages, C++ and Matlab. Parts of the program are implemented sometimes in C++ and sometimes in Matlab, so different designs are tested. The aim of this thesis is to check whether these designs meet the real-time bound of 10 m/s.</p><p>The results show that there is a huge difference in time performance between the different designs. Therefore, different discretization levels were used in order to meet the deadline of 10 m/s. We found that, in order to get better speed, one should calculate the matrix for the function <em>F</em> and the Jacobian function <em>J</em> using C++ rather than Matlab. We also found that calling functions from one language in another adds extra time.</p>
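The timing comparison described above can be illustrated with a small sketch: evaluate a residual function F and its Jacobian J at a given discretization level and measure the average wall-clock time. The residual chosen here (f_i(x) = x_i² − 1, with a diagonal analytic Jacobian) is purely illustrative and is not the thesis's actual log model; the point is only the shape of the benchmark.

```python
import time

def F(x):
    # Illustrative residual: f_i(x) = x_i**2 - 1 (not the thesis's model)
    return [xi * xi - 1.0 for xi in x]

def J(x):
    # Analytic Jacobian of F: 2*x_i on the diagonal, zeros elsewhere
    n = len(x)
    return [[2.0 * x[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

def time_evaluation(n, repeats=100):
    """Average time to evaluate F and J once at discretization level n."""
    x = [0.1 + 0.9 * i / (n - 1) for i in range(n)]
    t0 = time.perf_counter()
    for _ in range(repeats):
        F(x)
        J(x)
    return (time.perf_counter() - t0) / repeats
```

Running `time_evaluation` at several values of `n` reproduces the kind of design comparison the thesis performs: the cost of the Jacobian grows quadratically with the discretization level, so the choice of language for F and J dominates the total time.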
36

Sand distribution along shelf-edge deltaic systems : a case study from eastern offshore Trinidad

Davila-Chacon, Anmar Carolina 15 February 2011 (has links)
The study area is situated along the obliquely converging boundary of the Caribbean and South American plates, offshore eastern Trinidad. Major structural elements in the shelf-break and deep-water slope regions include normal and counter-normal faults to the south and large transpressional fault zones to the north. Well logs and biostratigraphic information were analyzed for twenty-four wells in the study area to refine previous depositional environment interpretations. For the purposes of this net sand distribution analysis, the deltaic and marine portions of the shelf transit cycle were considered separately and named T and R cycles, respectively. T and R cycles were interpreted based on well-log patterns and depositional facies shifts. Six T/R cycles were interpreted within the Pliocene-to-recent stratigraphic succession, and shelf-edge trajectories were mapped for each of these cycles based on earlier stratigraphic correlations. Net-to-gross (NTG) ratios were calculated for each component of the T/R cycles and plotted against total thicknesses and net sand values. In addition, NTG trends were mapped for each interval and analyzed based on their proximity to the corresponding shelf edge. Mapping of the shelf-edge trajectories (SET) revealed that (1) SETs migrate northeasterly across the Columbus Basin through time and (2) shelf-edge orientations are parallel to the strike of growth faults in the south but deflect to the northeast near the Darien Ridge, indicating strong underlying structural control. The NTG plots and maps also revealed that (1) for T cycles, NTG values never exceed 60% and are inversely proportional to total thickness; (2) for R cycles, NTG values are highly variable, ranging from 35% to 90%; (3) NTG values increase as the shelf break is approached; and (4) the distribution of NTG ratios is also controlled by accommodation space created by local structures.
The Guiana Current is believed to play an important role in the redistribution and reworking of sand in the Columbus Basin. Aggradation and progradation distances were computed for each interval, and the results suggest that the younger sequences C2 (T-R cycle E) and C3 (T-R cycle F) show a stronger progradational trend than the older C4, C5 and C6. This strong progradational trend might indicate delivery of sand basinward, while for the older intervals the aggradational trend suggests an increase in sediment storage. On the long-term scale (1-2 m.y.) the Orinoco Delta seems to behave as an aggradational delta that increases sediment storage due to growth faulting and high subsidence rates. On the short-term scale, however, the Orinoco Delta seems to behave as a rapidly prograding delta for the younger sequences C2 and C3, where sediment bypass is more likely to occur, and as a rapidly aggrading (slowly prograding) margin for the older intervals C4, C5 and C6.
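The net-to-gross (NTG) ratio used throughout the abstract above is simply net sand thickness divided by total interval thickness. A toy illustration, assuming a regularly sampled well log reduced to a per-sample sand flag (the input representation and sample spacing are assumptions, not the study's actual workflow):

```python
def net_to_gross(sand_flags, sample_step=0.5):
    """Net-to-gross ratio: net sand thickness / total interval thickness.

    `sand_flags` marks each regularly spaced log sample as sand (True)
    or non-sand (False); `sample_step` is the assumed sample spacing
    in metres. Returns a value in [0, 1].
    """
    total = len(sand_flags) * sample_step   # gross interval thickness
    net = sum(sand_flags) * sample_step     # thickness flagged as sand
    return net / total if total else 0.0
```

In practice the sand flag would itself be derived from log cutoffs (e.g. gamma-ray thresholds), and NTG would be computed separately for each T and R cycle before plotting against total thickness.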
37

Techniques and Tools for Mining Pre-Deployment Testing Data

Chan, BRIAN 17 September 2009 (has links)
Pre-deployment field testing is the process of testing software to uncover unforeseen problems before it is released to the market. It is commonly conducted by recruiting users to experiment with the software in as natural a setting as possible. Information about the software is then sent to the developers as logs. Log data helps developers fix bugs and better understand user behaviour so they can refine functionality to user needs. More importantly, logs contain specific problems as well as call traces that developers can use to trace a problem's origin. However, developers focus their analysis on post-deployment data such as bug reports and CVS data, which has the disadvantage that software is released before it can be optimized. Therefore, more techniques are needed to harness field testing data to reduce post-deployment problems. We propose techniques to process log data generated by users in order to resolve problems in the application before its deployment. We introduce a metric system to predict the user-perceived quality of the software if it were released to the market in its current state. We also provide visualization techniques that identify the state of problems and patterns of problem interaction with users, providing insight into solving the problems. The visualization techniques can also be extended to determine the point of origin of a problem, to resolve it more efficiently. Additionally, we devise a method to determine the priority of reported problems. The results were generated from case studies on mobile software applications. The metric results showed a strong ability to predict the number of reported bugs in the software after its release. The visualization techniques uncovered problem patterns that gave developers insight into the relationship between problems and the users themselves. Our analysis of the characteristics of problems determined the highest-priority problems and their distribution among users.
/ Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2009-09-16 17:50:31.094
38

AN APPROACH FOR IDENTIFYING SERVICE COMPOSITION PATTERNS FROM EXECUTION LOGS

Tang, Ran 30 September 2010 (has links)
Different types of web resources, such as Web Services, HTTP-based APIs and websites, can be located on the web to provide various services, such as information access and online banking. Such services are the basic building blocks for composing more complex functionality that cannot be achieved by a single service. Many service-oriented applications can be composed to fulfill similar functional requirements. Across such applications, a set of services may be frequently used together to deliver a particular piece of functionality. Such a set of services is called a service composition pattern. More specifically, a service composition pattern consists of a set of services and the control flow among them. The reuse of service composition patterns can facilitate the composition of new applications, improve existing applications and optimize the maintenance process of services. To facilitate the identification and reuse of service composition patterns, we propose an approach that mines service composition patterns from execution logs produced by service-oriented applications at runtime. Since the execution logs can be produced by heterogeneous web resources, we propose a unified description schema to describe various web resources in order to identify functionally similar services of different types. This helps reveal complete service composition patterns. We then identify frequently associated services using the Apriori algorithm and heuristics. Finally, we recover the control flow among the services using an event graph and process mining techniques. The effectiveness of the approach is evaluated through two case studies. The results show that the unified description schema facilitates the identification of similar services of different types and that our approach can effectively identify service composition patterns. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2010-09-29 18:08:07.55
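The "frequently associated services" step above uses the Apriori algorithm. A minimal self-contained sketch (the input representation, each execution as a set of invoked services, is an assumption; the thesis additionally applies heuristics and recovers control flow, which this sketch omits):

```python
def frequent_service_sets(executions, min_support):
    """Apriori-style mining of service sets that co-occur in at least
    `min_support` executions. Each execution is the set of services it
    invoked. Returns {frozenset_of_services: support_count}."""
    items = {s for ex in executions for s in ex}
    frequent = {}
    level = [frozenset([i]) for i in items]   # candidate 1-item sets
    k = 1
    while level:
        counts = {c: sum(1 for ex in executions if c <= ex) for c in level}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # Join surviving k-sets to form (k+1)-item candidates; any candidate
        # with an infrequent subset cannot appear, by the Apriori property.
        level = list({a | b for a in survivors for b in survivors
                      if len(a | b) == k + 1})
        k += 1
    return frequent
```

Each surviving multi-service set is a candidate composition pattern; the control flow among its services would then be recovered from event ordering in the logs.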
39

Integrated approach to solving reservoir problems and evaluations using sequence stratigraphy, geological structures and diagenesis in Orange Basin, South Africa

Adekola, Solomon Adeniyi January 2010 (has links)
<p>Sandstone and shale samples were selected within the systems tracts for laboratory analyses. The sidewall and core samples were subjected to petrographic thin-section analysis and mineralogical analyses, including X-ray diffraction (XRD), scanning electron microscopy (SEM), energy-dispersive spectroscopy (EDS), and stable carbon and oxygen isotope geochemistry, to determine the diagenetic alteration at and after deposition in the basin. The shale samples were subjected to Rock-Eval pyrolysis and accelerated solvent extraction (ASE) prior to gas chromatographic (GC) and gas chromatographic-mass spectrometric (GC-MS) analyses of the rock extracts, in order to determine the provenance, type and thermal maturity of organic matter present in sediments of the Orange Basin. The results revealed a complex diagenetic history of sandstones in this basin, which includes compaction, cementation/micritization, dissolution, silicification/overgrowth of quartz, and fracturing. The Eh-pH data show that the cements in the investigated area of the basin were precipitated under weakly acidic and slightly alkaline conditions. The δ18O isotope values range from -1.648 to 10.054 ‰, -1.574 to 13.134 ‰, and -2.644 to 16.180 ‰ in the LST, TST, and HST, respectively, while the δ13C isotope values range from -25.667 to -12.44 ‰, -27.862 to -6.954 ‰, and -27.407 to -19.935 ‰ in the LST, TST, and HST, respectively. The plot of δ18O versus δ13C shows that the sediments were deposited in shallow marine temperate conditions.</p>
40

Time-based Workflow Mining

Canturk, Deniz 01 May 2005 (has links) (PDF)
Contemporary workflow management systems are driven by explicit process models, i.e., a completely specified workflow design is required in order to enact a given workflow process. Creating a workflow design is a complicated, time-consuming process, and typically there are discrepancies between the actual workflow processes and the processes as perceived by the management. Therefore, new techniques for discovering workflow models are required. The starting point for such techniques is so-called "workflow logs" containing information about the workflow process as it is actually being executed. In this thesis, a new mining technique based on time information is proposed. It is assumed that events in workflow logs bear timestamps. This information is used to determine task orders and control flows between tasks. With this new algorithm, the basic workflow structures, namely sequential, parallel, alternative and iterative (i.e., loop) routing, as well as the advanced workflow structure or-join, can be mined. While mining the workflow structures, the algorithm also handles the noise problem.
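The core idea of using timestamps to separate sequential from parallel routing can be sketched as follows. This is a simplified illustration, not the thesis's full algorithm: for each pair of tasks, if both orders are observed across cases the tasks are taken as parallel, and if only one order is ever observed they are taken as sequential (noise handling and loop detection are omitted).

```python
def ordering_relations(cases):
    """Classify task pairs from per-case lists of (task, timestamp).

    Returns {frozenset({a, b}): "sequence" | "parallel"}: 'parallel' if
    both orders of a and b are observed across cases, 'sequence' otherwise.
    """
    follows = set()  # (a, b) means a preceded b in some case
    for events in cases:
        ordered = [t for t, _ in sorted(events, key=lambda e: e[1])]
        for i, a in enumerate(ordered):
            for b in ordered[i + 1:]:
                follows.add((a, b))
    relations = {}
    for a, b in follows:
        key = frozenset({a, b})
        relations[key] = "parallel" if (b, a) in follows else "sequence"
    return relations
```

With enough cases, tasks on parallel branches eventually appear in both orders, while truly sequential tasks never do; a noise-aware version would require each order to be seen a minimum number of times before counting it.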
