About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Panorama of the application of the business games technique for administration education at higher education institutions in Bahia / Panorama da aplicação da técnica de jogos de empresas para a formação em administração nas instituições de ensino superior da Bahia

Motta, Gustavo da Silva January 2009 (has links)
97 p. / This study analyzes the conditions under which business games are applied in administration programs at higher education institutions (IES) in the state of Bahia, along three lines of analysis: mapping the use of the technique in the training of administrators; analyzing students' perceptions of the meaning of the games and their contribution to the students' education; and analyzing the institutional/faculty perspective through the teaching plans of the courses that use the technique. The study is justified by its potential to contribute to the qualitative improvement of teaching and learning in administration programs, focusing on the development, through business games, of the competencies a professional administrator needs. Building a panorama of the technique's application makes it possible to identify gaps in the educational process and to support institutions, program coordinators, and instructors in contributing effectively to the education of administrators. The research was therefore carried out in three stages, matching the specific objectives. In the first stage, the mapping, a quantitative-descriptive approach was used: 47 coordinators of administration programs at Bahian IES were interviewed.
In the second stage, on student perception, the subjects were 93 students. One phase used a quantitative-descriptive approach, with descriptive statistics (mean and standard deviation), confidence intervals for the means, Pearson correlation, and t-tests for independent samples, plus multiple linear regression to test the dependence relations among the variables. This stage also used a mixed quali-quantitative approach based on the Collective Subject Discourse (DSC) method. In the third stage, the institutional and faculty analysis, the approach was qualitative: documentary research applied to the teaching plans of the business games courses of 9 IES in Bahia. The mapping found an application rate for business games very similar to surveys carried out in other Brazilian states. The results highlighted the experiential character of the business games technique. The survey showed that the games contribute more to the development of technical and human skills than to conceptual skills, and that student involvement is a determinant of learning technical skills. It was further concluded that participants' individual involvement in the game activities matters more than the groups' performance in the simulation; that software availability is a determinant of whether games are applied; and that games classified as general should be used as a teaching technique supporting Strategic Management content, while those classified as functional should be applied in the specific courses of the administration program. / Salvador
12

Predicting Subjective Sleep Quality Using Objective Measurements in Older Adults

Sadeghi, Reza 19 May 2020 (has links)
No description available.
13

3D Facial Feature Extraction and Recognition. An investigation of 3D face recognition: correction and normalisation of the facial data, extraction of facial features and classification using machine learning techniques.

Al-Qatawneh, Sokyna M.S. January 2010 (has links)
Face recognition research using automatic or semi-automatic techniques has emerged over the last two decades. One reason for the growing interest in this topic is the wide range of possible applications for face recognition systems. Another is the emergence of affordable hardware supporting digital photography and video, which has made the acquisition of high-quality, high-resolution 2D images much more ubiquitous. However, 2D recognition systems are sensitive to subject pose and illumination variations, whereas 3D face recognition, which is not directly affected by such environmental changes, can be used alone or in combination with 2D recognition. Recently, with the development of more affordable 3D acquisition systems and the availability of 3D face databases, 3D face recognition has been attracting interest as a way to tackle the performance limitations of most existing 2D systems. In this research, we introduce a robust automated 3D face recognition system that works on 3D data of faces with varying facial expressions, hair, shoulders, clothing, etc., extracts features for discrimination, and uses machine learning techniques to make the final decision. A novel system for automatic processing of 3D facial data has been implemented using a multi-stage architecture: in a pre-processing and registration stage the data was standardized, spikes were removed, holes were filled, and the face area was extracted. The nose region, which is anatomically more rigid than other facial regions, was then automatically located and analysed by computing the precise location of the symmetry plane, after which useful facial features and a set of effective 3D curves were extracted. Finally, the recognition and matching stage was implemented using cascade-correlation neural networks and a support vector machine for classification, and nearest-neighbour algorithms for matching.
It is worth noting that the FRGC data set is the most challenging data set available supporting research on 3D face recognition and machine learning techniques are widely recognised as appropriate and efficient classification methods.
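As a sketch of the final matching stage only, a nearest-neighbour lookup over extracted feature vectors can be written in a few lines. The gallery names and feature values below are invented; a real system would use the curve-based 3D features described above:

```python
import math

# Hypothetical gallery of extracted 3D facial feature vectors (e.g.,
# curvature samples along profile curves), one per enrolled subject.
gallery = {
    "subject_01": [0.12, 0.85, 0.33, 0.47],
    "subject_02": [0.90, 0.15, 0.62, 0.28],
    "subject_03": [0.11, 0.80, 0.35, 0.50],
}

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbour(probe, gallery):
    """Return the gallery identity whose feature vector is closest to the probe."""
    return min(gallery, key=lambda name: euclidean(probe, gallery[name]))

probe = [0.13, 0.83, 0.34, 0.48]  # features from an unseen scan
print(nearest_neighbour(probe, gallery))
```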
14

DATA-DRIVEN MODELING OF IN-SERVICE PERFORMANCE OF FLEXIBLE PAVEMENTS, USING LIFE-CYCLE INFORMATION

Mohammad Hosseini, Arash January 2019 (has links)
Current pavement performance prediction models are based on parameters such as climate, traffic, environment, and material properties. While all these factors play important roles in pavement performance, the quality of production and construction is just as important. The designed properties of Hot Mix Asphalt (HMA) pavements, known as flexible pavements, are subject to change during the production and construction stages, so the final product is most often not an exact reflection of the design. In almost any highway project, such changes are common and likely to occur from different sources, by various causes, and at any stage, and they often have considerable impacts on the long-term performance of a project. The uncertainty of traffic and environmental factors, as well as the variability of material properties and pavement structural systems, are obstacles to precise prediction of pavement performance. It is therefore essential to adopt a hybrid approach to pavement performance prediction and design, in which deterministic values work alongside stochastic ones. Despite advances in technology, variability during the production and construction stages of flexible pavements is natural; quality control programs try to minimize these variations and keep them at desired levels. The information gathered at the production and construction stages is valuable to managers and researchers: it enables analyses of pavements based on as-produced and as-constructed values rather than design values. This study describes a geo-relational framework to connect pavement life-cycle information, allowing more intelligent, data-driven decisions about pavements.
The constructed geo-relational database can pave the way for artificial intelligence tools that help researchers and practitioners achieve more accurate pavement design, quality control programs, and maintenance activities. This study utilizes data collected as part of quality control programs to develop more accurate deterioration and performance models. These data not only provide a true perspective of actual measurements of different pavement properties but also show how those properties are distributed over the length of the pavement. The study develops distribution functions of pavement properties and incorporates them into general performance prediction models. These prediction models consist of different elements that work together to produce an accurate and detailed prediction of performance: the model predicts the occurrence and intensity of four common flexible pavement distresses (rutting and alligator, longitudinal, and transverse cracking) along with the total deterioration rate at different ages and locations of a pavement, based on the material properties, traffic, and climate of a given highway. What distinguishes the suggested models from conventional pavement models in the literature is their multiscale and multiphysics approach, which is believed to be essential for analyzing a complex system such as a flexible pavement. This approach discretizes the system into subsystems so that the proper computational tools can be employed to treat them, and it suits problems with a wide range of spatial and temporal scales and a variety of coupled physical phenomena, such as pavements. Moreover, the suggested framework relies on stochastic and machine learning techniques alongside conventional deterministic methods.
In addition, this study uses mechanical testing to provide better insight into pavement behavior. A series of performance tests was conducted on field core samples with a variety of material properties at different ages. These tests connect the lab results with the field performance surveys and with material, environmental, and loading properties; the mix volumetrics extracted from the cores also helped verify the distribution-function models. Finally, the deterioration of flexible pavements under four different distresses is investigated individually and, based on the findings, different models are suggested. Dividing the roadway into small sections allowed performance to be predicted at a finer resolution. These models are proposed to assist highway agencies in their pavement management processes and quality control programs; they showed a strong ability to predict field performance at any age during a pavement's service life. The results highlight the benefits to highway agencies of adopting a geo-relational framework for their pavement networks, and provide information and guidance for evolving toward data-driven pavement life-cycle management consisting of quality pre-construction, quality during construction, and deterioration post-construction. / Civil Engineering
15

Cooperative learning in the classroom: systematization for the discipline of chemistry / Aprendizagem cooperativa em sala de aula: sistematização para disciplina de química

Carolina Marques Carneiro 26 August 2013 (has links)
Universidade Federal do Ceará / This study used systematized lesson plans to apply the teaching methodology known as Cooperative Learning with second-semester students of the chemistry teaching degree (Licenciatura em Química) at the Universidade Federal do Ceará. Drawing on the students' experiences, it evaluated this alternative methodology as a viable classroom approach and assessed the students' interest in, and satisfaction with, this method of group study. The use of the methodology emphasized the importance of student learning not only in the cognitive domain but, above all, recognition of group study and improved social relations as instruments of learning. Through the preparation of lesson plans, and with the aid of the techniques known as Jigsaw, pair checking, and Learning Together, it was possible to fit the methodology to the teaching of the course without at any point compromising the syllabus content. The students' positive reception of this system of teaching provided the research with evidence that chemistry content can be taught with this methodology. Finally, the study elucidates how the methodology can contribute positively to the learning process.
16

Improving Resource Management in Virtualized Data Centers using Application Performance Models

Kundu, Sajib 01 April 2013 (has links)
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications render administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold: cloud users can size their VMs appropriately and pay only for the resources they need, and service providers can offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients pay exactly for the performance they actually experience, while administrators can maximize total revenue by exploiting application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment.
Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, to accurately model the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these modeling tools. Third, we presented an approach to optimal VM sizing by employing the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
17

Estimation of global solar radiation by the Hargreaves model and machine learning in 11 regions of São Paulo, Brazil / Estimativa da radiação solar global pelos modelos de Hargreaves e aprendizado de máquina em 11 regiões de São Paulo/Brasil

Zamora Ortega, Lisett Rocio January 2020 (has links)
Advisor: João Francisco Escobedo / Abstract: This work describes a comparative study of methods for estimating daily global solar irradiation (HG) using the Hargreaves-Samani (H-S) model, HG/HO = a ΔT^0.5, and two machine learning techniques, Support Vector Machines (SVM) and Artificial Neural Networks (ANN). The database was obtained in 11 cities of the state of São Paulo, of different climatic classifications, over the period 2013-2017. Through regression between atmospheric transmissivity (HG/HO) and the square root of the temperature range (ΔT^0.5), the H-S statistical model was calibrated, yielding values of the constant (a) and equations that estimate HG, with low coefficients of determination, for two conditions: the 11 cities individually and all cities combined. The H-S models were validated through correlations between estimated and measured values using the correlation coefficient (r) and rRMSE, whose values indicated that the models can estimate HG with reasonable precision and accuracy. The computational techniques, SVM and ANN, were trained on 70% of the data, first with the same variables used in the H-S model and then with four additional meteorological variables as inputs, totalling 5 combinations. The trained models were validated on an independent subset comprising the remaining 30% of the data. The statistical indicator (r) of the correlations showed that the H-S model estimates HG with low coefficients of determination. The statistical indicators rMBE, MBE, rRMSE, and RMSE indicate that the H-S model can be used to estimate HG with r... (Complete abstract: click electronic access below) / Mestre
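The H-S calibration described above amounts to a one-parameter regression through the origin: with x = √ΔT and y = HG/HO, the least-squares constant is a = Σxy / Σx². A small sketch with invented daily records (not the São Paulo data) shows the procedure:

```python
import math

# Hypothetical daily records: extraterrestrial irradiation HO (MJ m^-2),
# measured global irradiation HG (MJ m^-2), temperature range dT (deg C).
records = [  # (HO, HG, dT) -- invented values for illustration
    (38.0, 21.5, 10.2),
    (37.5, 19.8, 8.9),
    (36.8, 23.1, 12.4),
    (35.9, 17.2, 7.5),
    (34.7, 20.0, 11.0),
]

# Calibrate HG/HO = a * dT**0.5 by least squares through the origin.
x = [math.sqrt(dt) for _, _, dt in records]
y = [hg / ho for ho, hg, _ in records]
a = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def estimate_hg(ho, dt):
    """Hargreaves-Samani estimate of global irradiation."""
    return a * math.sqrt(dt) * ho

print(f"calibrated a = {a:.4f}")
print(f"estimate for HO=36, dT=10: {estimate_hg(36.0, 10.0):.1f} MJ m^-2")
```

Validating against a held-out set with r and rRMSE, as the study does, would then compare `estimate_hg` against the measured HG values.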
18

Transparency of transitivity in pantomime, sign language

Charles Roger Bradley (6410666) 02 May 2020 (has links)
This dissertation investigates whether transitivity distinctions are manifest in the phonetics of linguistic and paralinguistic manual actions, namely lexical verbs and classifier constructions in American Sign Language (ASL) and gestures produced by hearing non-signers without speech (i.e., pantomime). A positive result would indicate that grammatical features are (a) transparent and (b) may thus arise from non-linguistic sources, here the visual-praxic domain. Given previous literature, we predict that transitivity is transparent in pantomime and classifier constructions, but opaque in lexical verbs.

We first collected judgments from hearing non-signers who classed pantomimes, classifier constructions, and ASL lexical verbs as unergative, unaccusative, transitive, or ditransitive. We found that non-signers consistently judged items across all three stimulus types, suggesting that there is transitivity-related information in the signed signal.

We then asked whether non-signers' judging ability has its roots in a top-down or bottom-up strategy. A top-down strategy might entail guessing the meaning of the sign or pantomime and then using the guessed meaning to assess/guess its transitivity. A bottom-up strategy entails using one or more meaningful phonetic features available in the formation of the signal to judge an item. We predicted that both strategies would be available in classing pantomimes and classifier constructions, but that transitivity information would only be available top-down in lexical verbs, given that the former are argued to be more imagistic generally than lexical verbs. Further, each strategy makes a different prediction with respect to the internal representation of signs and pantomimes. The top-down strategy would suggest signs and pantomimes are unanalyzable wholes, whereas the bottom-up strategy would suggest the same are compositional.

For the top-down analysis, we correlated lexical iconicity score and a measure of the degree to which non-signers 'agreed' on the transitivity of an item. We found that lexical iconicity only weakly predicts non-signer judgments of transitivity, on average explaining 10-20% of the variance for each stimulus class. However, we note that this is the only strategy available for lexical verbs.

For the bottom-up analysis, we annotate our stimuli for phonetic and phonological features known to be relevant to transitivity and/or event semantics in sign languages. We then apply a text classification model to try to predict transitivity from these features. As expected, our classifiers achieved stably high accuracy for pantomimes and classifier constructions, but only chance accuracy for lexical verbs.

Taken together, the top-down and bottom-up analyses were able to predict non-signer transitivity judgments for the pantomimes and classifier constructions, with the bottom-up analysis providing a stronger, more convincing result. For lexical verbs, only the top-down analysis was relevant and it performed weakly, providing little explanatory power. When interpreting these results, we look to the semantics of the stimuli to explain the observed differences between classes: pantomimes and classifier constructions both encode events of motion and manipulation (by human hands), the transitivity of which may be encoded using a limited set of strategies. By contrast, lexical verbs denote a multitude of event types, with properties of those events (and not necessarily their transitivity) being preferentially encoded compared to the encoding of transitivity. That is, the resolution of transitivity is a much more difficult problem when looking at lexical verbs.

This dissertation contributes to the growing body of literature that appreciates how linguistic and paralinguistic forms may be both (para)linguistic and iconic at the same time. It further helps disentangle at least two different types of iconicities (lexical vs. structural), which may be selectively active in some signs or constructions but not others. We also argue from our results that pantomimes are not holistic units, but instead combine elements of form and meaning in an analogous way to classifier constructions. Finally, this work also contributes to the discussion of how Language could have evolved in the species from a gesture-first perspective: the 'understanding' of others' object-directed (i.e., transitive) manual actions becomes communicative.
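The bottom-up analysis, predicting transitivity from annotated form features, can be caricatured with a tiny nearest-neighbour classifier. The feature inventory and labels below are invented; the dissertation's actual annotations and text-classification model are far richer:

```python
# Toy bottom-up classifier: predict transitivity class from annotated
# phonetic features. Features and labels are invented for the sketch.
training = [
    ({"handling_handshape", "two_hands", "path_movement"}, "transitive"),
    ({"handling_handshape", "path_movement"}, "transitive"),
    ({"body_anchored", "internal_movement"}, "unergative"),
    ({"entity_handshape", "path_movement"}, "unaccusative"),
    ({"handling_handshape", "two_hands", "arc_movement"}, "ditransitive"),
]

def jaccard(a, b):
    """Set overlap: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b)

def predict(features):
    """Label of the training item whose feature set is most similar."""
    return max(training, key=lambda item: jaccard(features, item[0]))[1]

probe = {"handling_handshape", "path_movement", "two_hands"}
print(predict(probe))
```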
19

Supplemental Instruction and the Promotion of Construction of Knowledge

Chambers-Turner, Ruth C 01 January 2017 (has links)
Supplemental instruction (SI), a peer-assisted learning model, improves course performance, retention, and graduation rates of post-secondary students. Researchers have questioned whether the success of SI is due to students becoming more aware of assessment demands or whether SI also promotes construction of new knowledge. The purposes of this case study were to describe techniques utilized by SI peer leaders, explore how sociocognitive learning techniques are implemented, and explore the perceptions of supplemental instruction program stakeholders regarding sociocognitive learning techniques. The research questions focused on what techniques peer leaders are trained to implement and what peer leaders' perceptions of sociocognitive learning techniques are. Piaget's theories on cognitive conflict and construction of knowledge, Vygotsky's theories on the zone of proximal development and sociocognitive learning, and Chi's framework on interactive learning provided the conceptual framework for the study. The case study was conducted at a community college in the northeastern United States, using interviews with SI administrators and peer leaders (n = 8), voice recordings of SI sessions, and a review of training material. Constant comparative analysis of the findings suggests that peer leaders trained in traditional tutoring and sociocognitive learning techniques promote construction of knowledge and that, while programmatic and peer-leader goals align with sociocognitive learning techniques, student goals do not. One recommendation is to clarify the distinction between SI and traditional tutoring. This study could lead to positive social change by contributing to expanded goals for SI, resulting in enhanced quality of learning for student participants at higher education institutions.
20

Bee Shadow Recognition in Video Analysis of Omnidirectional Bee Traffic

Alavala, Laasya 01 August 2019 (has links)
Over a decade ago, beekeepers noticed that bees were dying or disappearing without any prior health disorder. Colony Collapse Disorder (CCD) has since been a major threat to bee colonies around the world, affecting vital human crop pollination. Possible instigators of CCD include viral and fungal diseases, decreased genetic diversity, pesticides, and a variety of other factors; interactions among any of these potential facets may result in immunity loss for honey bees and an increased likelihood of collapse. It is essential to rescue honey bees and improve the health of bee colonies. Monitoring the traffic of bees helps track the status of a hive remotely. An electronic beehive monitoring system extracts video, audio, and temperature data without causing any interruption to the hives; this data can provide vital information on colony behavior and health. This research uses artificial intelligence and computer vision methodologies to develop and analyze technologies to monitor omnidirectional bee traffic of hives without disrupting the colony. Bee traffic means the number of bees moving in a given area in front of the hive over a given period of time. Forager traffic is the number of bees entering and/or leaving the hive over a period of time; it is a significant component in monitoring food availability and demand, colony age structure, and the impacts of pests and diseases on hives. The goal of this research is to estimate and track bee traffic by eliminating unnecessary information from video samples.
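A minimal sketch of the kind of motion-based counting that underlies bee-traffic estimation: pixels that change between consecutive frames are grouped into connected blobs, and each blob is counted as one moving object. The character-grid frames are synthetic stand-ins; real footage would need OpenCV-style background subtraction:

```python
# Two consecutive synthetic "frames": '.' is background, 'B' is a bee.
FRAME_A = [
    "..........",
    "..........",
    "..........",
    "..........",
]
FRAME_B = [
    ".BB.......",
    ".BB.......",
    ".......B..",
    "......BB..",
]

def moving_blobs(prev, curr):
    """Count connected regions of changed pixels (4-connectivity)."""
    h, w = len(curr), len(curr[0])
    changed = {(r, c) for r in range(h) for c in range(w)
               if prev[r][c] != curr[r][c]}
    blobs = 0
    while changed:
        blobs += 1
        stack = [changed.pop()]
        while stack:  # flood-fill one blob of changed pixels
            r, c = stack.pop()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in changed:
                    changed.remove(nb)
                    stack.append(nb)
    return blobs

print(moving_blobs(FRAME_A, FRAME_B))  # -> 2
```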
