231 |
Fundamental validity issues of an English as a foreign language test: a process-oriented approach to examining the reading construct as measured by the DR Congo English state examination
Katalayi, Godefroid Bantumbandi January 2014 (has links)
Doctor Educationis / The study aims to investigate the fundamental validity issues that can affect the DR Congo English state examination, a national exit test administered to high school final-year students for certification. The study aspires to generate an understanding of the potential issues that affect the construct validity of a test within the epistemological stance that supports a strong relationship between test construct and test context.
The study draws its theoretical underpinning from three theories: the validity theory that provides a theoretical ground necessary for understanding the quality of tests needed for assessing students’ reading abilities; the construction-integration theory that provides an understanding of how texts used in reading assessments are processed and understood by the examinees; and the strategic competence theory that explains how examinees deploy strategies to complete test tasks, and the extent to which these strategies tap into the reading construct.
Furthermore, the study proposes a reading model that signposts the social context of testing, thereby conceptualizing reading as both a cognitive and a social process. As its research design, the study adopts an exploratory design using both qualitative and quantitative data. In addition, the study uses protocol analysis and content analysis methodologies. While the former provides an understanding of the cognitive processes that mediate the reading construct and test performance, so as to explore the different strategies examinees use to answer the English state examination (henceforth termed ESE) test questions, the latter examines the content of the different ESE papers so as to identify the different textual and item features that potentially impact examinees’ performance on the ESE tasks. As instruments, the study uses a concurrent strategies questionnaire administered to 496 student-participants, a contextual questionnaire administered to 26 student-participants, a contextual questionnaire administered to 27 teacher-participants, and eight tests administered to 496 student-participants. The findings indicate that the ESE appears to be less appropriate to the ESE context, as the majority of ESE test items target careful reading rather than expeditious reading, on the one hand, and reading at the global level rather than at the local level, on the other. The findings also indicate that the ESE tasks hardly take account of text structure and the underlying cognitive demands appropriate to the text types. Moreover, the ESE fails to include other critical aspects of the reading construct. Finally, the findings indicate that the ESE constructors may not be capable of constructing an ESE with five functioning distractors as expected, and that the inclusion of the implicit option 6 overlaps with the conceptual meaning of this option. The entire process of the present study has generated some insights that can advance our understanding of the construct validity of reading tests. These insights are: (a) the concept of validity is an evolving and context-dependent concept, (b) the reading construct cannot be examined outside the actual context of reading activity, (c) elimination of distractors can sometimes be a construct-relevant strategy, (d) construct underrepresentation is a context-dependent concept, and (e) a reading test cannot be valid in all contexts. The suggested proposal for the improvement of the ESE requires the Congolese government, through its Department of Education, to (a) always conduct validation studies to justify the use of the ESE, (b) always consider the actual context of reading activity while developing the ESE, (c) revisit the meanings and interpretations of the ESE scores, (d) ensure the appropriateness of tasks
to be included in the ESE, (e) ensure the construct representativeness of the ESE tasks, (f) revisit the number of questions to be included in the ESE, (g) avoid bias in the ESE texts in order to ensure fairness, (h) diversify the genres of ESE texts, (i) ensure the coherence of ESE texts through the use of transitions and cohesive devices, (j) ensure that the order of test questions is in alignment with the order of text information, (k) revisit the structure and length of the texts to be included in the ESE, (l) revisit the number of alternatives to be included in the ESE, and (m) reconsider the use of the implicit alternative 6.
|
232 |
Určování období vzniku interpretace za pomoci metod parametrizace hudebního signálu / Recognizing the historical period of interpretation based on the music signal parameterization
Král, Vítězslav January 2018 (has links)
The aim of this semestral work is to summarize the existing knowledge in the area of comparing musical recordings and to implement an evaluation system for determining the period in which an interpretation was created, using music signal parameterization. The first part of this work describes the representations that music can take. Next, it gives a cross-section of the parameters that can be extracted from music recordings, providing information on the dynamics, tempo, tone colour, or temporal development of a recording. The second part describes the evaluation system and its individual sub-blocks. The input data for this evaluation system is a database of 56 sound recordings of the first movement of Beethoven’s 5th Symphony. The last chapter is dedicated to a summary of the achieved results.
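The thesis does not name its software toolchain; as a hedged illustration only, the kinds of parameters it describes (dynamics, tempo, tone colour, and their development in time) can be extracted from a recording with the Python library librosa. The file name and settings below are hypothetical.

```python
# Sketch: extracting dynamics, tempo, and timbre descriptors from one
# recording, as inputs to a period-classification system. The file path
# is hypothetical; librosa is an assumed toolchain, not the thesis's own.
import numpy as np
import librosa

y, sr = librosa.load("beethoven_5th_mvt1.wav", sr=22050, mono=True)

# Dynamics: RMS energy per frame, plus its range as a crude dynamic span.
rms = librosa.feature.rms(y=y)[0]
dynamic_range = float(rms.max() - rms.min())

# Tempo: global estimate from beat tracking.
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

# Tone colour (timbre): spectral centroid and MFCC summary statistics.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

feature_vector = np.concatenate([
    [dynamic_range, float(tempo), centroid.mean()],
    mfcc.mean(axis=1),          # time development collapsed to means
])
print(feature_vector.shape)     # one fixed-length vector per recording
```

One such fixed-length vector per recording could then feed the comparison sub-blocks of the evaluation system.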
|
233 |
Rozpoznávání markantních rysů na nábojnicích / Recognition of Important Features on Weapon Shells
Janáček, Matej January 2010 (has links)
The text covers the automated recognition and comparison of features on used cartridge cases, in order to improve the effectiveness of comparable manual ballistic systems. The work addresses the issue of programming an application for the automated recognition and comparison of features on used cartridge cases.
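The abstract does not state which features or matching method the application uses; purely as an illustrative sketch, keypoint descriptors such as ORB can be extracted from two cartridge-case images and matched to score their similarity. The file names and the match-distance cutoff are hypothetical.

```python
# Illustrative sketch only: comparing two cartridge-case images with ORB
# keypoints. The thesis's actual feature definitions are not given in the
# abstract; file names and the distance cutoff are hypothetical.
import cv2

img1 = cv2.imread("case_evidence.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("case_reference.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming-distance brute-force matching with cross-checking.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# A simple similarity score: number of sufficiently close matches.
good = [m for m in matches if m.distance < 40]
print(f"{len(good)} strong matches out of {len(matches)}")
```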
|
234 |
Contributions à la fusion des informations : application à la reconnaissance des obstacles dans les images visible et infrarouge / Contributions to Information Fusion: Application to Obstacle Recognition in Visible and Infrared Images
Apatean, Anca Ioana 15 October 2010 (has links)
Afin de poursuivre et d'améliorer la tâche de détection qui est en cours à l'INSA, nous nous sommes concentrés sur la fusion des informations visibles et infrarouges du point de vue de la reconnaissance des obstacles, afin de distinguer les véhicules, les piétons, les cyclistes et les obstacles de fond. Les systèmes bimodaux ont été proposés pour fusionner l'information à différents niveaux : des caractéristiques, des noyaux SVM, ou des scores SVM. Ils ont été pondérés selon l'importance relative des capteurs de modalité pour assurer l'adaptation (fixe ou dynamique) du système aux conditions environnementales. Pour évaluer la pertinence des caractéristiques, différentes méthodes de sélection ont été testées par un PPV, qui fut plus tard remplacé par un SVM. Une opération de recherche de modèle, réalisée par validation croisée à 10 plis, fournit le noyau optimisé pour le SVM. Les résultats ont prouvé que tous les systèmes bimodaux VIS-IR sont meilleurs que leurs correspondants monomodaux. / To continue and improve the detection task in progress at the INSA laboratory, we focused on the fusion of the information provided by visible and infrared cameras from the viewpoint of an Obstacle Recognition module, thus discriminating between vehicles, pedestrians, cyclists and background obstacles. Bimodal systems have been proposed to fuse the information at different levels: features, SVM kernels, or SVM matching scores. These were weighted according to the relative importance of the modality sensors to ensure the (fixed or dynamic) adaptation of the system to the environmental conditions. To evaluate the pertinence of the features, different feature selection methods were tested with a KNN classifier, which was later replaced by a SVM. A model search, performed by 10-fold cross-validation, provides the optimized kernel for the SVM. The results have proven that all bimodal VIS-IR systems are better than their corresponding monomodal ones.
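As a sketch of the score-level fusion variant described above (with hypothetical data and weights, not the authors' implementation), two SVMs can be trained separately per modality, their kernels optimized by 10-fold cross-validation, and their decision scores combined with modality weights:

```python
# Sketch of weighted score-level VIS-IR fusion with SVMs (not the authors'
# code). X_vis, X_ir and y are hypothetical feature matrices and labels.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_vis = rng.normal(size=(400, 32))   # visible-modality features
X_ir = rng.normal(size=(400, 32))    # infrared-modality features
y = rng.integers(0, 4, size=400)     # vehicle/pedestrian/cyclist/background

idx_train, idx_test = train_test_split(np.arange(400), random_state=0)
grid = {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10], "gamma": ["scale"]}

def fit_modality(X):
    # Model search by 10-fold cross-validation, as in the thesis.
    search = GridSearchCV(SVC(), grid, cv=10)
    search.fit(X[idx_train], y[idx_train])
    return search.best_estimator_

svm_vis = fit_modality(X_vis)
svm_ir = fit_modality(X_ir)

# Fixed modality weights; a dynamic scheme would adapt them at runtime.
w_vis, w_ir = 0.6, 0.4
scores = (w_vis * svm_vis.decision_function(X_vis[idx_test])
          + w_ir * svm_ir.decision_function(X_ir[idx_test]))
accuracy = (scores.argmax(axis=1) == y[idx_test]).mean()
print(f"fused accuracy: {accuracy:.2f}")
```

A dynamic variant would recompute w_vis and w_ir from estimated environmental conditions (for example, low light favouring the infrared score) instead of fixing them.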
|
235 |
Alternative Approaches for the Registration of Terrestrial Laser Scanners Data using Linear/Planar Features
Dewen Shi (9731966) 15 December 2020 (has links)
Static terrestrial laser scanners have been increasingly used in three-dimensional data acquisition since they can rapidly provide accurate measurements with high resolution. Several scans from multiple viewpoints are necessary to achieve complete coverage of the surveyed objects due to occlusion and large object size. Therefore, in order to reconstruct three-dimensional models of the objects, the task of registration is required to transform several individual scans into a common reference frame. This thesis introduces three alternative approaches for the coarse registration of two adjacent scans, namely, a feature-based approach, a pseudo-conjugate point-based method, and a closed-form solution. In the feature-based approach, linear and planar features in the overlapping area of adjacent scans are selected as registration primitives. The pseudo-conjugate point-based method utilizes non-corresponding points along common linear and planar features to estimate the transformation parameters. The pseudo-conjugate point-based method is simpler than the feature-based approach since the partial derivatives are easier to compute. In the closed-form solution, a rotation matrix is first estimated by using a unit quaternion, which is a concise description of the rotation. Afterward, the translation parameters are estimated with non-corresponding points along the linear or planar features by using the pseudo-conjugate point-based method. Alternative approaches for fitting a line or plane to data with errors in three-dimensional space are also investigated.

Experiments are conducted using simulated and real datasets to verify the effectiveness of the introduced registration procedures and feature-fitting approaches. The two proposed line-fitting approaches are tested with simulated datasets. The results suggest that these two approaches produce identical line parameters and variance-covariance matrices. The three registration approaches are tested with both simulated and real datasets. In the simulated datasets, all three registration approaches produced equivalent transformation parameters using linear or planar features. The comparison between the simulated linear and planar features shows that both feature types can produce equivalent registration results. In the real datasets, the three registration approaches using linear or planar features also produced equivalent results. In addition, the results using real data indicate that the registration approaches using planar features produced better results than those using linear features. The experiments show that the pseudo-conjugate point-based approach is easier to implement than the feature-based approach. The pseudo-conjugate point-based method and the feature-based approach are nonlinear, so an initial guess of the transformation parameters is required in these two approaches. Compared to the nonlinear approaches, the closed-form solution is linear and hence can achieve the registration of two adjacent scans without requiring any initial guess of the transformation parameters. Therefore, the pseudo-conjugate point-based method and the closed-form solution are the preferred approaches for coarse registration using linear or planar features. In real practice, planar features are preferable to linear features since the linear features are derived indirectly by the intersection of neighboring planar features. To get enough lines with different orientations, planes that are far apart from each other have to be extrapolated to derive lines.
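As background to the line- and plane-fitting question, here is a minimal sketch of the standard total-least-squares construction: after centering, a single SVD of the point cloud yields both a best-fit line direction and a best-fit plane normal. This is offered only as the textbook baseline, not necessarily the error model the thesis investigates; the data below are simulated.

```python
# Sketch: least-squares line and plane fitting to noisy 3-D points via SVD.
# This is the standard total-least-squares construction, shown only as
# background to the fitting approaches investigated in the thesis.
import numpy as np

def fit_line_and_plane(points):
    """points: (n, 3) array. Returns (centroid, line_dir, plane_normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    line_dir = vt[0]        # direction of largest spread
    plane_normal = vt[2]    # direction of smallest spread
    return centroid, line_dir, plane_normal

# Simulated data: points near the plane z = 0.1x - 0.2y + 3.
rng = np.random.default_rng(1)
xy = rng.uniform(-5, 5, size=(200, 2))
z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 3 + rng.normal(0, 0.02, 200)
pts = np.column_stack([xy, z])

c, d, n = fit_line_and_plane(pts)
print("recovered plane normal (true direction [-0.1, 0.2, 1]):", n / n[2])
```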
|
236 |
Výuka výslovnosti angličtiny jako cizího jazyka / Pronunciation instruction in the context of TEFL
Nelson, Sabina January 2019 (has links)
Pronunciation instruction in the TEFL classroom has long been a neglected area regardless of its importance for students. The data in the literature show that teachers are generally not ready to provide pronunciation instruction for a variety of reasons: lack of qualification and training, of theoretical and practical knowledge, and of time and motivation. The present thesis explores the current situation of pronunciation instruction at a private language school in the Czech Republic using classroom observations and teacher and student surveys. The results confirm the initial hypothesis that pronunciation instruction, including pronunciation error correction, is nearly non-existent or occurs only sporadically in the classroom. Only one out of four teachers (T1) included explicit pronunciation information in his teaching. The only pronunciation error correction technique observed with the four teachers was the recast, which proved to be ineffective in most cases. Even though the teachers and students are generally aware of the importance of pronunciation in foreign language acquisition, their individual beliefs and attitudes towards pronunciation learning and teaching differ greatly. Key words: pronunciation, TEFL, explicit instruction, segmental features, suprasegmental features, teacher and student cognition
|
237 |
Formal Analysis of Variability-Intensive and Context-Sensitive Systems
Chrszon, Philipp 29 January 2021 (has links)
With the widespread use of information systems in modern society comes a growing demand for customizable and adaptable software. As a result, systems are increasingly developed as families of products adapted to specific contexts and requirements. Features are an established concept to capture the commonalities and variability between system variants. Most prominently, the concept is applied in the design, modeling, analysis, and implementation of software product lines where products are built upon a common base and are distinguished by their features. While adaptations encapsulated within features are mainly static and remain part of the system after deployment,
dynamic adaptations become increasingly important. Especially interconnected mobile devices and embedded systems are required to be context-sensitive and (self-)adaptive. A promising concept for the design and implementation of such systems are roles as they capture context-dependent and collaboration-specific behavior.
A major challenge in the development of feature-oriented and role-based systems are interactions, i.e., emergent behavior that arises from the combination of multiple features or roles. As the number of possible combinations is usually exponential in the number of features and roles, the detection of such interactions is difficult. Since unintended interactions may compromise the functional correctness of a system and may lead to reduced efficiency or reliability, it is desirable to detect them as early as possible in the
development process.
The goal of this thesis is to adopt the concepts of features and roles in the formal modeling and analysis of systems and system families. In particular, the focus is on the quantitative analysis of operational models by means of probabilistic model checking for supporting the development process and for ensuring correctness.
The tool ProFeat, which enables a quantitative analysis of stochastic system families defined in terms of features, has been extended with additional language constructs, support for a one-by-one analysis of system variants, and a symbolic representation of analysis results. The implementation is evaluated by means of several case studies which compare different analysis approaches and show how ProFeat facilitates a family-based quantitative analysis of systems.
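ProFeat delegates the analysis to the probabilistic model checker PRISM. As a toy illustration of the kind of computation this automates (a hypothetical chain, not ProFeat or PRISM code), a reachability probability in a discrete-time Markov chain reduces to solving a linear system:

```python
# Toy illustration of the computation behind probabilistic model checking:
# reachability probabilities in a small discrete-time Markov chain. States
# 0 and 1 are transient; state 2 is the "success" target and state 3 an
# absorbing failure. The chain is hypothetical, not a ProFeat model.
import numpy as np

P = np.array([
    [0.5, 0.3, 0.1, 0.1],   # state 0
    [0.2, 0.3, 0.4, 0.1],   # state 1
    [0.0, 0.0, 1.0, 0.0],   # state 2: target (absorbing)
    [0.0, 0.0, 0.0, 1.0],   # state 3: failure (absorbing)
])
Q = P[:2, :2]   # transient-to-transient block
b = P[:2, 2]    # one-step probabilities into the target

# Reachability satisfies x = Qx + b, i.e. the linear system (I - Q) x = b.
x = np.linalg.solve(np.eye(2) - Q, b)
print(x)        # approx. [0.655, 0.759]: P(reach target) from states 0, 1
```

A family-based analysis performs such computations over a single model covering all feature combinations, rather than solving one system per product variant.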
For the compositional modeling of role-based systems, role-based automata (RBA) are introduced. The thesis presents a modeling language that is based on the input language of the probabilistic model checker PRISM to compactly describe RBA. Accompanying tool support translates RBA models into the PRISM language to enable the formal analysis of functional and non-functional properties, including system dynamics, contextual changes, and interactions. Furthermore, an approach for a declarative and compositional definition of role coordinators based on the exogenous coordination language Reo is proposed. The adequacy of the RBA approach for detecting interactions within context-sensitive and adaptive systems is shown by several case studies.

1 Introduction
1.1 Engineering approaches for variant-rich adaptive systems
1.2 Validation and verification methods
1.3 Analysis of feature-oriented and role-based systems
1.4 Contribution
1.5 Outline
2 Preliminaries
I Feature-oriented systems
3 Feature-oriented engineering for family-based analysis
3.1 Feature-oriented development
3.2 Describing system families: The ProFeat language
3.2.1 Feature-oriented language constructs
3.2.2 Parametrization
3.2.3 Metaprogramming language extensions
3.2.4 Property specifications
3.2.5 Semantics
3.3 Implementation
3.3.1 Translation of ProFeat models
3.3.2 Post-processing of analysis results
4 Case studies and application areas
4.1 Comparing family-based and product-based analysis
4.1.1 Analysis of feature-oriented systems
4.1.2 Analysis of parametrized systems
4.2 Software product lines
4.2.1 Body sensor network
4.2.2 Elevator product line
4.3 Self-adaptive systems
4.3.1 Adaptive network system model
4.3.2 Adaptation protocol for distributed systems
II Role-based Systems
5 Formal modeling and analysis of role-based systems
5.1 The role concept
5.1.1 Towards a common notion of roles
5.1.2 The Compartment Role Object Model
5.1.3 Roles in programming languages
5.2 Compositional modeling of role-based behavior
5.2.1 Role-based automata and their composition
5.2.2 Algebraic properties of compositions
5.2.3 Coordination and semantics of RBA
6 Implementation of a role-oriented modeling language
6.1 Role-oriented modeling language
6.1.1 Declaration of the system structure
6.1.2 Definition of operational behavior
6.2 Translation of role-based models
6.2.1 Transformation to multi-action MDPs
6.2.2 Multi-action extension of PRISM
6.2.3 Translation of components
6.2.4 Translation of role-playing coordinators
6.2.5 Encoding role-playing into states
7 Exogenous coordination of roles
7.1 The exogenous coordination language Reo
7.2 Constraint automata
7.3 Embedding of role-based automata in constraint automata
7.4 Implementation
7.4.1 Exogenous coordination of PRISM modules
7.4.2 Reo for exogenous coordination within PRISM
8 Evaluation of the role-oriented approach
8.1 Experimental studies
8.1.1 Peer-to-peer file transfer
8.1.2 Self-adaptive production cell
8.1.3 File transfer with exogenous coordination
8.2 Classification
8.3 Related work
8.3.1 Role-based approaches
8.3.2 Aspect-oriented approaches
8.3.3 Feature-oriented approaches
9 Conclusion
|
238 |
Test Modeling of Dynamic Variable Systems using Feature Petri Nets
Püschel, Georg, Seidl, Christoph, Neufert, Mathias, Gorzel, André, Aßmann, Uwe 08 November 2013 (has links)
In order to generate substantial market impact, mobile applications must be able to run on multiple platforms. Hence, software engineers face a multitude of technologies and system versions, resulting in static variability. Furthermore, due to the dependence on sensors and connectivity, mobile software has to adapt its behavior accordingly at runtime, resulting in dynamic variability. However, software engineers need to assure the quality of a mobile application even with this large amount of variability; in our approach, this is done by means of model-based testing (i.e., the generation of test cases from models). Recent concepts of test metamodels cannot efficiently handle dynamic variability. To overcome this problem, we propose a process for creating black-box test models based on dynamic feature Petri nets, which allow the description of configuration-dependent behavior and reconfiguration. We use feature models to define variability in the system under test. Furthermore, we illustrate our approach by introducing an example translator application.
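As a toy sketch of the underlying idea only (invented names, not the formalism defined in this paper), a feature Petri net attaches a feature-expression guard to each transition, so the net's enabled behavior depends on the current configuration and changes under runtime reconfiguration:

```python
# Toy sketch of a feature Petri net: transitions carry a feature guard in
# addition to the usual token preconditions, so enabling depends on the
# currently active feature set, which may change at runtime. This is an
# illustration of the idea only, not the paper's formalism.

marking = {"idle": 1, "downloading": 0, "done": 0}
features = {"wifi"}          # current configuration (may be reconfigured)

transitions = {
    # name: (input places, output places, guard over active feature set)
    "start_fast": (["idle"], ["downloading"], lambda f: "wifi" in f),
    "start_slow": (["idle"], ["downloading"], lambda f: "cellular" in f),
    "finish":     (["downloading"], ["done"], lambda f: True),
}

def enabled(name):
    inputs, _, guard = transitions[name]
    return all(marking[p] > 0 for p in inputs) and guard(features)

def fire(name):
    inputs, outputs, _ = transitions[name]
    assert enabled(name), f"{name} is not enabled"
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

fire("start_fast")           # a token in "idle" and the wifi feature active
features.discard("wifi")     # runtime reconfiguration: wifi drops out
features.add("cellular")     # start_slow would now be the enabled variant
fire("finish")
print(marking)               # {'idle': 0, 'downloading': 0, 'done': 1}
```

Test generation can then explore firing sequences interleaved with reconfigurations, covering configuration-dependent behavior.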
|
239 |
Digital Image Processing via Combination of Low-Level and High-Level Approaches.
Wang, Dong January 2011 (has links)
With the growth of computer power, Digital Image Processing plays an increasingly important role in the modern world, including the fields of industry, medicine, communications, and spaceflight technology. There is no clear definition of how to divide digital image processing, but it normally includes three main steps: low-level, mid-level and high-level processing.

Low-level processing involves primitive operations, such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. Finally, high-level processing involves "making sense" of an ensemble of recognised objects, as in image analysis.
Based on the theory just described in the last paragraph, this thesis is organised in three parts: Colour Edge and Face Detection; Hand Motion Detection; and Hand Gesture Detection and Medical Image Processing.
In Colour Edge Detection, two new images, the G-image and the R-image, are built through a colour space transform; after that, the two edges extracted from the G-image and the R-image respectively are combined to obtain the final new edge. In Face Detection, a skin model is built first; then the boundary condition of this skin model can be extracted to cover almost all of the skin pixels. After skin detection, knowledge about the size, size ratio, and locations of the ears and mouth is used to recognise the face in the skin regions.
In Hand Motion Detection, the frame difference is compared with an automatically chosen threshold in order to identify the moving object. For some special situations, with slow or smooth object motion, background modelling and frame differencing are combined in order to improve the performance.
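A minimal sketch of that frame-differencing step follows, with the threshold chosen automatically by Otsu's method; OpenCV is an assumed toolchain, and the abstract does not say which automatic threshold rule the thesis actually uses.

```python
# Minimal frame-differencing sketch with an automatically chosen (Otsu)
# threshold. OpenCV is an assumed toolchain and the input video name is
# hypothetical; the thesis's own threshold rule is not detailed here.
import cv2

cap = cv2.VideoCapture("hand_motion.avi")
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)
    # Otsu derives the binarization threshold from the difference image's
    # own histogram, so no threshold needs to be tuned by hand.
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    print("moving pixels:", cv2.countNonZero(mask))
    prev = gray

cap.release()
```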
In Hand Gesture Recognition, three features of every test image are input to a Gaussian Mixture Model (GMM), and then the Expectation-Maximization (EM) algorithm is used to compare the GMMs from test images with the GMMs from training images in order to classify the results.
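A brief sketch of GMM-based classification (assumed toolchain and simulated data, not the thesis's code): one mixture is fitted per gesture class, with EM running inside the fit, and a test sample is assigned to the class whose model gives it the highest likelihood.

```python
# Sketch of GMM-based gesture classification (toolchain assumed, not the
# thesis's code): one Gaussian mixture is fitted per gesture class by EM,
# and a test sample goes to the class whose GMM scores it highest.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Simulated training data: 3 features per image, 3 gesture classes.
train = {g: rng.normal(loc=3 * g, size=(100, 3)) for g in range(3)}

models = {}
for gesture, X in train.items():
    models[gesture] = GaussianMixture(n_components=2, random_state=0).fit(X)

def classify(x):
    # score_samples returns the per-sample log-likelihood under each GMM.
    return max(models, key=lambda g: models[g].score_samples(x[None, :])[0])

print(classify(rng.normal(loc=3.0, size=3)))   # most likely: gesture class 1
```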
In Medical Image Processing (mammograms), an Artificial Neural Network (ANN) and a clustering rule are applied to choose the features. Two classifiers, an ANN and a Support Vector Machine (SVM), have been applied to classify the results; in this processing, the balanced learning theory and optimized decision-making that have been developed are applied to improve the performance.
|
240 |
Рабочая тетрадь по русскому языку для иностранных студентов Екатеринбурга : магистерская диссертация / Russian language workbook for foreign students of Ekaterinburg
Полякова, Е. Ю., Polyakova, E. Y. January 2019 (links)
Магистерская диссертация «Рабочая тетрадь по русскому языку для иностранных студентов Екатеринбурга» состоит из двух глав. Содержит 106 страниц и макет рабочей тетради по русскому языку как иностранному в приложении. Библиографический список разбит на список источников, включающий 40 рабочих тетрадей и основную литературу, включающую 71 источник. Цель исследования – на основе анализа рабочих тетрадей как учебных изданий предложить макет рабочей тетради по русскому как иностранному. Объект исследования – рабочие тетради как учебные издания. Предмет исследования – формальные и содержательные особенности рабочих тетрадей как учебных изданий. В первой главе рассмотрено понятие рабочей тетради как учебного издания, дано определение, указаны цели, функции, виды, преимущества рабочих тетрадей, а также рекомендации для их составления. Рассмотрено понятие концепции издания и ее элементы. Проведен анализ 40 рабочих тетрадей. Выявлены формальные и содержательные особенности. Во второй главе описана концепция рабочей тетради по русскому как иностранному, указаны ее цель, функции, юридический и издательский аспект. Определены формальные и содержательные особенности. Результатом проведенной работы стал макет рабочей тетради по русскому как иностранному «В Екатеринбург с любовью», представленный в приложении. / The master's thesis "Russian Language Workbook for Foreign Students of Ekaterinburg" consists of two chapters. It contains 106 pages and, in the appendix, the layout of a workbook for Russian as a foreign language. The bibliography is divided into a list of sources, including 40 workbooks, and the main literature, comprising 71 sources. The purpose of the study is to propose a layout for a workbook of Russian as a foreign language based on the analysis of workbooks as educational publications. The object of the study is workbooks as educational publications. The subject of the research is the formal and content features of workbooks as educational publications. The first chapter considers the concept of a workbook as an educational publication: it defines the term and specifies the goals, functions, types, and advantages of workbooks, as well as recommendations for their compilation. The notion of a publication concept and its elements are also considered. An analysis of 40 workbooks revealed their formal and content features. The second chapter describes the concept of a workbook for Russian as a foreign language, specifying its purpose, functions, and legal and publishing aspects, and defines its formal and content features. The result of this work is the layout of a workbook of Russian as a foreign language, "To Ekaterinburg with Love", presented in the appendix.
|