  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

Seleção e construção de features relevantes para o aprendizado de máquina. / Relevant feature selection and construction for machine learning.

Huei Diana Lee 27 April 2000 (has links)
In supervised Machine Learning (ML), an induction algorithm is presented with a set of training instances, where each instance is described by a vector of feature values and a class label. The task of the induction algorithm (inducer) is to induce a classifier that will be useful in classifying new cases. Conventional inductive learning algorithms rely on user-provided data to build their descriptions. An inadequate representation space or description language, as well as errors in the training examples, can make learning problems difficult. One of the main problems in ML is Feature Subset Selection (FSS): the learning algorithm is faced with the problem of selecting some subset of features upon which to focus its attention, while ignoring the rest. There are several reasons for doing FSS. The first is that most computationally feasible ML algorithms do not work well in the presence of a very large number of features, so FSS can improve the accuracy of the classifiers these algorithms generate. Another reason is that FSS can improve comprehensibility, i.e. the human ability to understand the data and the rules generated by symbolic ML algorithms. A third reason is the high cost in some domains of collecting data; finally, FSS can reduce the cost of processing huge quantities of data. There are basically three approaches to FSS in Machine Learning: embedded, filter and wrapper. On the other hand, if the features provided to describe the training examples are inadequate, learning algorithms are likely to create excessively complex and inaccurate descriptions. However, these individually inadequate features can sometimes be conveniently combined to generate new features that turn out to be highly representative of the concept description. The process of constructing new features is called Constructive Induction (CI).
In this work we focus on the filter and wrapper approaches to FSS, as well as knowledge-driven CI. We describe a series of FSS and CI experiments performed on four natural datasets using several symbolic ML algorithms. For each dataset, various measures are taken to compare inducer performance, such as accuracy, running time of the inducer and the number of features selected by each evaluated induction algorithm. Several experiments using three real-world datasets are also described. The focus of these case studies is not only comparing the performance of the induction algorithms, but also evaluating the extracted knowledge. During the knowledge extraction step, results were presented to the domain specialists, who made many suggestions for further experiments. Some of the knowledge extracted from these three case studies was considered very interesting by the specialists. This shows that interaction between different areas (in this specific case, the medical and computational areas) can produce valuable results. Thus, two groups of researchers need to be brought together if the application of Machine Learning is to bear fruit: those acquainted with existing ML methods, and those with expertise in the application domain who can provide the data and evaluate the acquired knowledge.
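The wrapper approach described above can be sketched in a few lines; this is a minimal illustration (not code from the thesis), using greedy forward selection in which each candidate feature subset is scored by the leave-one-out accuracy of a 1-nearest-neighbour classifier:

```python
# Hypothetical sketch of wrapper-style Feature Subset Selection (FSS):
# greedy forward selection, scoring each candidate subset with the
# leave-one-out accuracy of a 1-nearest-neighbour classifier.

def loo_accuracy(X, y, feats):
    """Leave-one-out accuracy of 1-NN restricted to the given features."""
    correct = 0
    for i in range(len(X)):
        best_d, best_j = None, None
        for j in range(len(X)):
            if j == i:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in feats)
            if best_d is None or d < best_d:
                best_d, best_j = d, j
        if y[best_j] == y[i]:
            correct += 1
    return correct / len(X)

def forward_select(X, y, n_features):
    """Greedily add the feature that most improves wrapper accuracy."""
    selected, remaining = [], list(range(n_features))
    best_score = 0.0
    while remaining:
        scored = [(loo_accuracy(X, y, selected + [f]), f) for f in remaining]
        score, f = max(scored)
        if score <= best_score:  # no improvement: stop
            break
        best_score = score
        selected.append(f)
        remaining.remove(f)
    return selected, best_score
```

A filter approach would differ only in the scoring function: instead of running the inducer, it would rank features by a data-intrinsic measure such as correlation with the class.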
232

Fundamental validity issues of an English as a foreign language test: a process-oriented approach to examining the reading construct as measured by the DR Congo English state examination

Katalayi, Godefroid Bantumbandi January 2014 (has links)
Doctor Educationis / The study aims to investigate the fundamental validity issues that can affect the DR Congo English state examination, a national exit test administered to high school final year students for certification. The study aspires to generate an understanding of the potential issues that affect the construct validity of a test within the epistemological stance that supports a strong relationship between test construct and test context. The study draws its theoretical underpinning from three theories: the validity theory that provides a theoretical ground necessary for understanding the quality of tests needed for assessing students’ reading abilities; the construction-integration theory that provides an understanding of how texts used in reading assessments are processed and understood by the examinees; and the strategic competence theory that explains how examinees deploy strategies to complete test tasks, and the extent to which these strategies tap into the reading construct. Furthermore, the study proposes a reading model that signposts the social context of testing; therefore, conceptualizing reading as both a cognitive and a social process. As research design, the study adopts an exploratory design using both qualitative and quantitative data. Besides, the study uses protocol analysis and content analysis methodologies. While the former provides an understanding of the cognitive processes that mediate the reading construct and test performance so as to explore the different strategies examinees use to answer the English state examination (henceforth termed ESE) test questions, the latter examines the content of the different ESE papers so as to identify the different textual and item features that potentially impact on examinees’ performance on the ESE tasks. 
As instruments, the study uses a concurrent strategies questionnaire administered to 496 student participants, a contextual questionnaire administered to 26 student participants, a contextual questionnaire administered to 27 teacher participants, and eight tests administered to 496 student participants. The findings indicate that the ESE appears poorly suited to its context, as the majority of ESE test items target careful reading rather than expeditious reading on the one hand, and reading at the global level rather than at the local level on the other. The findings also indicate that the ESE tasks hardly take account of text structure and the underlying cognitive demands appropriate to the text types. Moreover, the ESE fails to include other critical aspects of the reading construct. Finally, the findings indicate that the ESE constructors may not be capable of constructing an ESE with five functioning distractors as expected, and that the inclusion of the implicit option 6 overlaps with the conceptual meaning of this option. The entire process of the present study has generated some insights that can advance our understanding of the construct validity of reading tests. These insights are: (a) the concept of validity is an evolving and context-dependent concept, (b) the reading construct cannot be examined outside the actual context of the reading activity, (c) elimination of distractors can sometimes be a construct-relevant strategy, (d) construct underrepresentation is a context-dependent concept, and (e) a reading test cannot be valid in all contexts.
The suggested proposal for the improvement of the ESE requires the Congolese government through its Department of Education to (a) always conduct validation studies to justify the use of the ESE, (b) always consider the actual context of reading activity while developing the ESE, (c) revisit the meanings and interpretations of the ESE scores, (d) ensure the appropriateness of tasks to be included in the ESE, (e) ensure the construct representativeness of the ESE tasks, (f) revisit the number of questions to be included in the ESE, (g) avoid bias in the ESE texts in order to ensure fairness, (h) diversify the genres of ESE texts, (i) ensure the coherence of ESE texts through the use of transitions and cohesive devices, (j) ensure that the order of test questions is in alignment with the order of text information, (k) revisit the structure and length of the texts to be included in the ESE, (l) revisit the number of alternatives to be included in the ESE, and (m) reconsider the use of the implicit alternative 6.
233

Určování období vzniku interpretace za pomoci metod parametrizace hudebního signálu / Recognizing the historical period of interpretation based on the music signal parameterization

Král, Vítězslav January 2018 (has links)
The aim of this thesis is to summarize existing knowledge in the area of comparing musical recordings and to implement an evaluation system for determining the period in which an interpretation was created, using music signal parameterization. The first part of the work describes the representations that music can take. Next, it presents a cross-section of parameters that can be extracted from music recordings, providing information on the dynamics, tempo, timbre, or temporal development of the recording. The second part describes the evaluation system and its individual sub-blocks. The input data for this evaluation system is a database of 56 sound recordings of the first movement of Beethoven's 5th Symphony. The last chapter is dedicated to a summary of the achieved results.
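To illustrate what such parameterization involves (this is our own sketch, not the system described above), two classic low-level descriptors can be computed from a framed audio signal: per-frame RMS energy, a proxy for dynamics, and zero-crossing rate, which correlates with spectral brightness:

```python
# Illustrative sketch of basic music-signal parameterization: framing a
# signal, then computing per-frame RMS energy (a proxy for dynamics) and
# zero-crossing rate (related to timbre/brightness).

def frames(signal, size, hop):
    """Split a signal into overlapping frames of `size` samples."""
    return [signal[i:i + size]
            for i in range(0, len(signal) - size + 1, hop)]

def rms(frame):
    """Root-mean-square energy of one frame."""
    return (sum(x * x for x in frame) / len(frame)) ** 0.5

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose sign changes."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)
```

The time series of such descriptors over an entire recording is what a downstream classifier would compare across interpretations.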
234

Rozpoznávání markantních rysů na nábojnicích / Recognition of Important Features on Weapon Shells

Janáček, Matej January 2010 (has links)
The text covers the automated recognition and comparison of features on used cartridge cases, in order to increase the effectiveness of similar manual ballistic systems. The work addresses the issue of programming an application for the automated recognition and comparison of features on used cartridge cases.
235

Contributions à la fusion des informations : application à la reconnaissance des obstacles dans les images visible et infrarouge / Contributions to the Information Fusion : application to Obstacle Recognition in Visible and Infrared Images

Apatean, Anca Ioana 15 October 2010 (has links)
To continue and improve the detection task in progress at the INSA laboratory, we focused on fusing the information provided by visible and infrared cameras from the viewpoint of an obstacle recognition module, thus discriminating between vehicles, pedestrians, cyclists and background obstacles. Bimodal systems have been proposed to fuse the information at different levels: features, SVM kernels, or SVM matching scores. These were weighted according to the relative importance of the modality sensors to ensure the adaptation (fixed or dynamic) of the system to the environmental conditions. To evaluate the pertinence of the features, different feature selection methods were tested with a KNN classifier, which was later replaced by an SVM. A model-search operation, performed by 10-fold cross-validation, provides the optimized kernel for the SVM. The results have proven that all bimodal VIS-IR systems are better than their corresponding monomodal ones.
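Score-level fusion, the simplest of the three fusion levels mentioned above, can be sketched as a weighted sum of per-class matching scores; the class names and weights below are illustrative assumptions, not values from the thesis, and the weights are fixed here whereas the thesis also considers dynamic adaptation:

```python
# Hedged sketch of score-level fusion for a bimodal (VIS-IR) classifier:
# each modality produces a per-class matching score, and the scores are
# combined with modality weights before picking the best class.

def fuse_scores(scores_vis, scores_ir, w_vis=0.6, w_ir=0.4):
    """Weighted sum of per-class scores from the two modalities."""
    return {c: w_vis * scores_vis[c] + w_ir * scores_ir[c]
            for c in scores_vis}

def classify(scores_vis, scores_ir, **weights):
    """Return the class with the highest fused score."""
    fused = fuse_scores(scores_vis, scores_ir, **weights)
    return max(fused, key=fused.get)
```

Shifting the weights toward one modality models the system trusting, say, the infrared camera more at night.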
236

Alternative Approaches for the Registration of Terrestrial Laser Scanners Data using Linear/Planar Features

Dewen Shi (9731966) 15 December 2020 (has links)
Static terrestrial laser scanners have been increasingly used in three-dimensional data acquisition since they can rapidly provide accurate measurements with high resolution. Several scans from multiple viewpoints are necessary to achieve complete coverage of the surveyed objects due to occlusion and large object size. Therefore, in order to reconstruct three-dimensional models of the objects, the task of registration is required to transform several individual scans into a common reference frame. This thesis introduces three alternative approaches for the coarse registration of two adjacent scans, namely, a feature-based approach, a pseudo-conjugate point-based method, and a closed-form solution. In the feature-based approach, linear and planar features in the overlapping area of adjacent scans are selected as registration primitives. The pseudo-conjugate point-based method utilizes non-corresponding points along common linear and planar features to estimate the transformation parameters; it is simpler than the feature-based approach since the partial derivatives are easier to compute. In the closed-form solution, a rotation matrix is first estimated by using a unit quaternion, which is a concise description of the rotation. Afterward, the translation parameters are estimated with non-corresponding points along the linear or planar features by using the pseudo-conjugate point-based method. Alternative approaches for fitting a line or plane to data with errors in three-dimensional space are also investigated.

Experiments are conducted using simulated and real datasets to verify the effectiveness of the introduced registration procedures and feature-fitting approaches. The two proposed line-fitting approaches are tested with simulated datasets; the results suggest that they produce identical line parameters and variance-covariance matrices.
The three registration approaches are tested with both simulated and real datasets. On the simulated datasets, all three registration approaches produced equivalent transformation parameters using linear or planar features, and the comparison between simulated linear and planar features shows that both can produce equivalent registration results. On the real datasets, the three registration approaches using linear or planar features also produced equivalent results; in addition, the results indicate that the approaches using planar features performed better than those using linear features. The experiments show that the pseudo-conjugate point-based approach is easier to implement than the feature-based approach. The pseudo-conjugate point-based method and the feature-based approach are nonlinear, so an initial guess of the transformation parameters is required in these two approaches. Compared to the nonlinear approaches, the closed-form solution is linear and can therefore register two adjacent scans without any initial guess of the transformation parameters. The pseudo-conjugate point-based method and the closed-form solution are thus the preferred approaches for coarse registration using linear or planar features. In practice, planar features are preferable to linear features, since the linear features are derived indirectly by intersecting neighboring planar features: to get enough lines with different orientations, planes that are far apart from each other have to be extrapolated to derive lines.
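The closed-form rotation step can be illustrated with the standard quaternion-based estimate (Horn's method): the dominant eigenvector of a 4x4 matrix built from the cross-covariance of the two point sets is the unit quaternion of the aligning rotation. The sketch below assumes corresponding points for simplicity, whereas the thesis works with non-corresponding points along features; the point sets in the usage example are illustrative:

```python
import numpy as np

# Sketch of the closed-form rotation estimate used in quaternion-based
# registration (Horn's method).

def rotation_from_quaternion(q):
    """Rotation matrix for a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def estimate_rotation(P, Q):
    """Rotation R (3x3) best aligning centered P onto centered Q."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    S = P.T @ Q                        # cross-covariance matrix
    A = S - S.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    N = np.empty((4, 4))               # Horn's symmetric 4x4 matrix
    N[0, 0] = np.trace(S)
    N[0, 1:] = delta
    N[1:, 0] = delta
    N[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    _, vecs = np.linalg.eigh(N)        # eigenvalues ascending
    return rotation_from_quaternion(vecs[:, -1])   # dominant eigenvector
```

With the rotation fixed, the remaining translation is linear in the unknowns, which is why the overall closed-form solution needs no initial guess.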
237

Výuka výslovnosti angličtiny jako cizího jazyka / Pronunciation instruction in the context of TEFL

Nelson, Sabina January 2019 (has links)
Pronunciation instruction in the TEFL classroom has long been a neglected area, regardless of its importance for students. The literature shows that teachers are generally not ready to provide pronunciation instruction for a variety of reasons: lack of qualification and training, of theoretical and practical knowledge, and of time and motivation. The present thesis explores the current situation of pronunciation instruction at a private language school in the Czech Republic using classroom observations and teacher and student surveys. The results confirm the initial hypothesis that pronunciation instruction, including pronunciation error correction, is nearly non-existent or occurs only sporadically in the classroom. Only one of the four teachers (T1) included explicit pronunciation information in his teaching. The only pronunciation error correction technique observed with the four teachers was the recast, which proved ineffective in most cases. Even though teachers and students are generally aware of the importance of pronunciation in foreign language acquisition, their individual beliefs and attitudes towards pronunciation learning and teaching differ greatly. Key words: pronunciation, TEFL, explicit instruction, segmental features, suprasegmental features, teacher and student cognition
238

Formal Analysis of Variability-Intensive and Context-Sensitive Systems

Chrszon, Philipp 29 January 2021 (has links)
With the widespread use of information systems in modern society comes a growing demand for customizable and adaptable software. As a result, systems are increasingly developed as families of products adapted to specific contexts and requirements. Features are an established concept to capture the commonalities and variability between system variants. Most prominently, the concept is applied in the design, modeling, analysis, and implementation of software product lines where products are built upon a common base and are distinguished by their features. While adaptations encapsulated within features are mainly static and remain part of the system after deployment, dynamic adaptations become increasingly important. Especially interconnected mobile devices and embedded systems are required to be context-sensitive and (self-)adaptive. A promising concept for the design and implementation of such systems are roles as they capture context-dependent and collaboration-specific behavior. A major challenge in the development of feature-oriented and role-based systems are interactions, i.e., emergent behavior that arises from the combination of multiple features or roles. As the number of possible combinations is usually exponential in the number of features and roles, the detection of such interactions is difficult. Since unintended interactions may compromise the functional correctness of a system and may lead to reduced efficiency or reliability, it is desirable to detect them as early as possible in the development process. The goal of this thesis is to adopt the concepts of features and roles in the formal modeling and analysis of systems and system families. In particular, the focus is on the quantitative analysis of operational models by means of probabilistic model checking for supporting the development process and for ensuring correctness. 
The tool ProFeat, which enables a quantitative analysis of stochastic system families defined in terms of features, has been extended with additional language constructs, support for a one-by-one analysis of system variants, and a symbolic representation of analysis results. The implementation is evaluated by means of several case studies which compare different analysis approaches and show how ProFeat facilitates a family-based quantitative analysis of systems. For the compositional modeling of role-based systems, role-based automata (RBA) are introduced. The thesis presents a modeling language that is based on the input language of the probabilistic model checker PRISM to compactly describe RBA. Accompanying tool support translates RBA models into the PRISM language to enable the formal analysis of functional and non-functional properties, including system dynamics, contextual changes, and interactions. Furthermore, an approach for a declarative and compositional definition of role coordinators based on the exogenous coordination language Reo is proposed. 
The adequacy of the RBA approach for detecting interactions within context-sensitive and adaptive systems is shown by several case studies.

Contents:
1 Introduction
  1.1 Engineering approaches for variant-rich adaptive systems
  1.2 Validation and verification methods
  1.3 Analysis of feature-oriented and role-based systems
  1.4 Contribution
  1.5 Outline
2 Preliminaries
I Feature-oriented systems
3 Feature-oriented engineering for family-based analysis
  3.1 Feature-oriented development
  3.2 Describing system families: The ProFeat language
    3.2.1 Feature-oriented language constructs
    3.2.2 Parametrization
    3.2.3 Metaprogramming language extensions
    3.2.4 Property specifications
    3.2.5 Semantics
  3.3 Implementation
    3.3.1 Translation of ProFeat models
    3.3.2 Post-processing of analysis results
4 Case studies and application areas
  4.1 Comparing family-based and product-based analysis
    4.1.1 Analysis of feature-oriented systems
    4.1.2 Analysis of parametrized systems
  4.2 Software product lines
    4.2.1 Body sensor network
    4.2.2 Elevator product line
  4.3 Self-adaptive systems
    4.3.1 Adaptive network system model
    4.3.2 Adaptation protocol for distributed systems
II Role-based Systems
5 Formal modeling and analysis of role-based systems
  5.1 The role concept
    5.1.1 Towards a common notion of roles
    5.1.2 The Compartment Role Object Model
    5.1.3 Roles in programming languages
  5.2 Compositional modeling of role-based behavior
    5.2.1 Role-based automata and their composition
    5.2.2 Algebraic properties of compositions
    5.2.3 Coordination and semantics of RBA
6 Implementation of a role-oriented modeling language
  6.1 Role-oriented modeling language
    6.1.1 Declaration of the system structure
    6.1.2 Definition of operational behavior
  6.2 Translation of role-based models
    6.2.1 Transformation to multi-action MDPs
    6.2.2 Multi-action extension of PRISM
    6.2.3 Translation of components
    6.2.4 Translation of role-playing coordinators
    6.2.5 Encoding role-playing into states
7 Exogenous coordination of roles
  7.1 The exogenous coordination language Reo
  7.2 Constraint automata
  7.3 Embedding of role-based automata in constraint automata
  7.4 Implementation
    7.4.1 Exogenous coordination of PRISM modules
    7.4.2 Reo for exogenous coordination within PRISM
8 Evaluation of the role-oriented approach
  8.1 Experimental studies
    8.1.1 Peer-to-peer file transfer
    8.1.2 Self-adaptive production cell
    8.1.3 File transfer with exogenous coordination
  8.2 Classification
  8.3 Related work
    8.3.1 Role-based approaches
    8.3.2 Aspect-oriented approaches
    8.3.3 Feature-oriented approaches
9 Conclusion
239

Test Modeling of Dynamic Variable Systems using Feature Petri Nets

Püschel, Georg, Seidl, Christoph, Neufert, Mathias, Gorzel, André, Aßmann, Uwe 08 November 2013 (has links)
In order to generate substantial market impact, mobile applications must be able to run on multiple platforms. Hence, software engineers face a multitude of technologies and system versions resulting in static variability. Furthermore, due to the dependence on sensors and connectivity, mobile software has to adapt its behavior accordingly at runtime resulting in dynamic variability. However, software engineers need to assure quality of a mobile application even with this large amount of variability—in our approach by the use of model-based testing (i.e., the generation of test cases from models). Recent concepts of test metamodels cannot efficiently handle dynamic variability. To overcome this problem, we propose a process for creating black-box test models based on dynamic feature Petri nets, which allow the description of configuration-dependent behavior and reconfiguration. We use feature models to define variability in the system under test. Furthermore, we illustrate our approach by introducing an example translator application.
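The core idea of configuration-dependent behavior can be shown with a toy model (our own sketch, not the authors' metamodel): each transition of a Petri net carries a guard over the currently active feature set, and runtime reconfiguration is modeled by changing that set. The place and feature names below are illustrative:

```python
# Toy sketch of a dynamic feature Petri net: a transition fires only
# when its input places are marked AND its feature guard holds for the
# currently active feature set. Mutating `features` models runtime
# reconfiguration.

class FeaturePetriNet:
    def __init__(self, marking, features):
        self.marking = dict(marking)    # place -> token count
        self.features = set(features)   # currently active features
        self.transitions = {}           # name -> (inputs, outputs, guard)

    def add_transition(self, name, inputs, outputs, guard=lambda f: True):
        self.transitions[name] = (inputs, outputs, guard)

    def enabled(self, name):
        inputs, _, guard = self.transitions[name]
        return (all(self.marking.get(p, 0) > 0 for p in inputs)
                and guard(self.features))

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} not enabled")
        inputs, outputs, _ = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
```

A test generator would then explore the reachable markings per configuration, which is exactly where guard-dependent enabling makes dynamic variability visible.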
240

Digital Image Processing via Combination of Low-Level and High-Level Approaches.

Wang, Dong January 2011 (has links)
With the growth of computer power, digital image processing plays an increasingly important role in the modern world, including in industry, medicine, communications, and spaceflight technology. There is no universally agreed way to divide up digital image processing, but it normally comprises three main steps: low-level, mid-level and high-level processing. Low-level processing involves primitive operations, such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. Mid-level processing involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. Finally, high-level processing involves "making sense" of an ensemble of recognised objects, as in image analysis. Based on this division, the thesis is organised in three parts: Colour Edge and Face Detection; Hand Motion Detection; and Hand Gesture Detection and Medical Image Processing. In Colour Edge Detection, two new images, the G-image and the R-image, are built through a colour space transform; the two edges extracted from the G-image and R-image respectively are then combined to obtain the final edge. In Face Detection, a skin model is built first, and the boundary conditions of this skin model are extracted to cover almost all skin pixels. After skin detection, knowledge about size, size ratio, and the locations of the ears and mouth is used to recognise the face within the skin regions. In Hand Motion Detection, the frame difference is compared with an automatically chosen threshold in order to identify the moving object. For some special situations, with slow or smooth object motion, background modelling and frame differencing are combined in order to improve the performance.
In Hand Gesture Recognition, three features of every test image are input to a Gaussian Mixture Model (GMM), and the Expectation-Maximization (EM) algorithm is then used to compare the GMMs from test images with the GMMs from training images in order to classify the results. In Medical Image Processing (mammograms), an Artificial Neural Network (ANN) and a clustering rule are applied to select the features. Two classifiers, an ANN and a Support Vector Machine (SVM), are applied to classify the results; in this process, balanced learning theory and an optimized decision strategy have been developed and applied to improve the performance.
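The frame-differencing step with an automatically chosen threshold can be sketched as follows; this is an illustrative assumption of how such a threshold might be derived (mean plus a multiple of the standard deviation of the difference image), not the thesis's actual rule, and `k` is an assumed tuning constant:

```python
# Illustrative sketch of motion detection by frame differencing:
# pixels whose absolute inter-frame difference exceeds
# mean + k * std of the difference image are marked as moving.

def motion_mask(prev, curr, k=2.0):
    """Binary motion mask for two flattened grayscale frames."""
    diff = [abs(a - b) for a, b in zip(prev, curr)]
    mean = sum(diff) / len(diff)
    var = sum((d - mean) ** 2 for d in diff) / len(diff)
    thresh = mean + k * var ** 0.5
    return [1 if d > thresh else 0 for d in diff]
```

Deriving the threshold from the statistics of each difference image is what makes it "automatic": a scene with more global change raises the threshold, so only genuinely outlying pixels are flagged.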
