About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
471

Prova Brasil and Prova SAEP: theoretical and methodological (in)convergences in the field of language

Záttera, Pricilla 11 August 2017
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

The quality of Brazilian education has often been on the agenda of discussions and reflections among researchers and educators, whether concerning what happens on the classroom floor, the production and socialization of scientific knowledge transformed into teaching content, or public policies in the educational field. Although these dimensions are intertwined, this research focuses on public policies, narrowing in on one of their components: large-scale assessments, which are presented as gauges of, among other aspects, advances (or not) in student learning. Since their advent in the 1990s, large-scale external assessments have taken a central position in educational policies and have become a parameter for decision-making regarding investments in, or financial cuts to, schools. Among the assessment systems in Brazil imposed on schools, two stand out: the Brazilian National Basic Education Assessment System (SAEB), which produces the Prova Brasil (a population-based assessment of public schools applied to 5th and 9th grade students), and the Paraná Basic Education Assessment System (SAEP), which produces the Prova Saep (applied to students of the 6th and 1st years, and the 9th and 3rd years). Both systems' assessments focus only on Portuguese Language and Mathematics.

Considering the importance that has been assigned to SAEB/Prova Brasil and SAEP/Prova Saep, leading school systems to adopt one-off measures (such as preparatory courses) to obtain quantitative results attesting to the quality of teaching and learning, we raised the following question about the Portuguese language tests: since these assessment systems target the same students, attending the same schools, are there interrelations between their objectives, their methodologies, and their conceptions of language and reading? This question was investigated through qualitative, interpretative, documentary research with a socio-historical approach, situated in the field of Applied Linguistics, aiming to observe whether such interrelations exist with respect to the aspects mentioned, and how this dialogue takes place. To this end, we first describe and analyze the configuration of these systems through the study of official documents, and then compare their convergences and/or divergences. We found that the assessments are convergent, especially with regard to methodology and conceptions of language and reading. The divergences found lie in the formulation of the assessments' objectives, more specifically in how their results are made available.
472

Enem: a retrospective and prospective analysis of the risks involved in being more than a diagnostic assessment

Silva, Denio Menezes da 18 July 2016
This work was developed within the Professional Master's in Management and Education Evaluation (PPGP) of the Center for Public Policy and Education Evaluation of the Federal University of Juiz de Fora (CAEd/UFJF). The management case discusses the administration of the Enem (the Brazilian national high school exam), today the best-known large-scale exam in Brazil and certainly the largest logistics system for standardized test administration in terms of national coverage. The general aim was to analyze the test administration phase in order to understand the relationship between what is prescribed in the management plan of Inep (the National Institute of Educational Studies Anísio Teixeira) and what is organized by the institutions that administer the Enem. The exam's scale of administration grew after 2010, when students' results came to be used as an important instrument in the Ministry of Education's set of policies, replacing the traditional entrance exams of public higher education institutions.

The new purposes attached to the results made the exam's operation qualitatively different from that of its first years, particularly regarding risks and vulnerabilities such as the possibility of premature leakage of test content. This research was exploratory and descriptive, of a qualitative nature. Data were collected through interviews with two focus groups, guided by a semi-structured script, comprising Inep managers and professionals from the institutions that administer the tests. There have been significant advances in the management of the exam's prescription and organization, with a participatory and shared model established across both planes over the period 2010-2015. However, risks remain, especially in the human dimension of the administration process. Sixteen actions were identified for the intervention plan for future editions of the Enem: seven focused on the human dimension, six on the procedural dimension, and three on the system and control dimension.
473

Performance of IEEE 802.15.4 beaconless-enabled protocol for low data rate ad hoc wireless sensor networks

Iqbal, Muhamad Syamsu January 2016
This thesis focuses on enhancing the IEEE 802.15.4 beaconless-enabled MAC protocol as a solution to the network bottleneck, reduced node flexibility, and greater energy waste of centralised wireless sensor networks (WSNs). These problems stem from the mechanism of choosing a centralised WSN coordinator to initiate communication and manage resources. Unlike the IEEE 802.11 standard, the IEEE 802.15.4 MAC protocol does not include a method to overcome the hidden-node problem. Moreover, understanding the behaviour and performance of a large-scale WSN is a very challenging task. A comparative study is conducted to investigate the performance of the proposed ad hoc WSN over both the low data rate IEEE 802.15.4 and the high data rate IEEE 802.11 standards. Simulation results show that, in small-scale networks, the ad hoc WSN over 802.15.4 outperforms the centralised WSN, improving four key performance indicators (throughput, packet delivery ratio, packet loss, and energy consumption) by up to 22.4%, 17.1%, 34.1%, and 43.2%, respectively. Nevertheless, the centralised WSN achieves lower end-to-end delay, by up to 2.0 ms in this study. Furthermore, ad hoc wireless sensor networks work well over both IEEE 802.15.4 and IEEE 802.11 in small-scale networks with low traffic loads. The performance of IEEE 802.15.4 declines for larger payload sizes, since this standard is dedicated to low-rate wireless personal area networks. A detailed performance investigation of the IEEE 802.15.4 beaconless-enabled wireless sensor network (BeWSN) in a hidden-node environment was conducted, followed by an investigation of network overhead in ad hoc networks over IEEE 802.11. The investigation shows that the performance of beaconless-enabled ad hoc wireless sensor networks deteriorates in the presence of hidden nodes, with throughput and packet reception degrading by up to 72.66 kbps and 35.2%, respectively.

Regarding end-to-end delay, however, the appearance of hidden nodes causes no significant performance deviation. On the other hand, preventing the hidden-node effect through the RTS/CTS mechanism introduces significant overhead in networks that use small packet sizes; this handshaking method is therefore unsuitable for low-rate communication protocols such as the IEEE 802.15.4 standard. An evaluation of a 101-node large-scale beaconless-enabled wireless sensor network over IEEE 802.15.4 was then carried out, after arranging the node deployment model. The experiment shows that, as connections grow denser, the probability of packet delivery decreases by up to 40.5% for the smallest payload size and by no less than 44.5% for the largest. Meanwhile, for all payload sizes applied to the large-scale ad hoc wireless sensor network, throughput increases as the network handles more connections among sensor nodes. In terms of dropped packets, fewer data drops occur at smaller numbers of connected nodes, where the protocol performs at least 34% better for the small payload size of 30 bytes; a similar trend holds for packet loss. In addition, the simulation results show that smaller payloads perform better in terms of network latency: the 30-byte payload contributes up to 41.7% less delay than the 90-byte payload.
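The finding that RTS/CTS handshaking is unsuitable for small frames can be illustrated with a back-of-the-envelope overhead calculation. The frame sizes below are illustrative assumptions for the sketch, not values taken from the IEEE 802.15.4 or 802.11 specifications:

```python
# Illustrative overhead estimate for an RTS/CTS handshake on small frames.
# Frame sizes are assumed round numbers, not standard-specified values.

RTS_BYTES = 20     # assumed RTS control-frame size
CTS_BYTES = 14     # assumed CTS control-frame size
HEADER_BYTES = 11  # assumed MAC header + FCS per data frame

def handshake_overhead(payload_bytes: int) -> float:
    """Fraction of transmitted bytes that carry no payload."""
    control = RTS_BYTES + CTS_BYTES
    total = control + HEADER_BYTES + payload_bytes
    return (control + HEADER_BYTES) / total

for payload in (30, 90, 1500):
    print(payload, round(handshake_overhead(payload), 2))
```

For a 30-byte payload, well over half of the bytes on air are control and header traffic, while for a 1500-byte 802.11-class frame the same handshake is a few percent: the overhead is fixed per frame, so it dominates exactly in the small-payload regime where IEEE 802.15.4 operates.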
474

The Design and Implementation of Optimization Approaches for Large Scale Ontology Alignment in SAMBO

Li, Huanyu January 2017
The current World Wide Web provides a convenient way for people to acquire information, but it cannot manipulate semantics: people can access data from web pages efficiently, but computer programs cannot effectively reuse and share that data. Tim Berners-Lee, the inventor of the World Wide Web, together with James Hendler and Ora Lassila, proposed the idea of the Semantic Web as an evolution of the existing Web. Knowledge representation for the Semantic Web developed from the Extensible Markup Language (XML) and the Resource Description Framework (RDF) to ontologies. Many researchers use ontologies to express concepts, relations, and relevant semantics in specific domains. However, different researchers may understand the same knowledge differently, which introduces inconsistent information into the same or similar ontologies.

SAMBO is an ontology alignment system designed and implemented by ADIT at Linköping University in 2005. Shortly after implementation, SAMBO could accomplish most ontology alignment tasks. Nevertheless, as ontologies grew rapidly in scale, SAMBO could no longer handle large-scale ontology alignment. The primary job of this thesis is to optimize the existing SAMBO system to support alignment of large-scale ontologies.

The principal parts of this thesis are as follows. First, we analyze two current top ontology alignment systems, AML and LogMap, which are capable of aligning large-scale ontologies; this analysis aims to identify the design features of high-quality systems. Then, we analyze the existing SAMBO to determine which aspects need optimization, concluding that SAMBO should be improved in its data structures, database design, and parallel matching. We therefore propose and implement the corresponding optimization approaches. Finally, we evaluate the new system on large-scale ontologies and obtain the desired results.
475

An Improved Design and Implementation of the Session-based SAMBO with Parallelization Techniques and MongoDB

Zhao, Yidan January 2017
The session-based SAMBO is an ontology alignment system that uses MySQL to store matching results. Currently, SAMBO can align most ontologies within acceptable time; for large-scale ontologies, however, it fails to reach that target. The main purpose of this thesis work is therefore to improve the performance of SAMBO, especially when matching large-scale ontologies. To this end, a comprehensive literature study and an investigation of two outstanding large-scale ontology alignment systems are carried out to set the directions for improvement, and a detailed investigation of the existing SAMBO is conducted to determine in which aspects the system can be improved. Parallel matching optimization and data management optimization are chosen as the primary goals of the thesis work. A number of relevant techniques are then studied and compared, and an optimized design is proposed and implemented. System testing of the improved SAMBO shows that both the parallel matching optimization and the data management optimization contribute greatly to its performance. However, the execution time of SAMBO when aligning large-scale ontologies with database interaction is still unacceptable.
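The parallel matching idea can be sketched as follows: split the cross-product of concept pairs into chunks and score each chunk on a separate worker. The token-overlap matcher, the concept lists, and the function names here are hypothetical stand-ins for illustration, not SAMBO's actual API:

```python
# Minimal sketch of parallelised pairwise ontology matching.
# The similarity function is a crude stand-in; SAMBO combines several
# real matchers (string, structural, background knowledge).
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def name_similarity(a: str, b: str) -> float:
    """Jaccard overlap of underscore-separated name tokens."""
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / len(ta | tb)

def match_chunk(pairs, threshold=0.5):
    """Score one chunk of concept pairs, keeping those above threshold."""
    return [(a, b, s) for a, b in pairs
            if (s := name_similarity(a, b)) >= threshold]

def parallel_match(onto1, onto2, workers=4, chunk_size=2):
    """Fan the full cross-product out over a pool of workers."""
    pairs = list(product(onto1, onto2))
    chunks = [pairs[i:i + chunk_size] for i in range(0, len(pairs), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(match_chunk, chunks)
    return [m for chunk in results for m in chunk]

onto_a = ["blood_cell", "nerve_cell", "heart"]
onto_b = ["blood_cell", "cardiac_muscle"]
print(parallel_match(onto_a, onto_b))
```

A thread pool is used here only to keep the sketch short; for CPU-bound matchers in CPython, a process pool (or, as in SAMBO's Java setting, native threads) would be the choice that actually yields parallel speedup.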
476

Reducing the Search Space of Ontology Alignment Using Clustering Techniques

Gao, Zhiming January 2017
With the growing amount of information available on the internet, how to make full use of it has become an urgent issue. One solution is to use ontology alignment to aggregate different sources of information in order to obtain comprehensive and complete information. Scalability is a known problem in ontology alignment, and it can be addressed by reducing the search space of mapping suggestions. In this thesis we propose an automated procedure that mainly uses clustering techniques to prune the search space. The main focus is to evaluate different clustering-related techniques for use in our system. K-means, Chameleon, and Birch were studied and evaluated; each parameter of these clustering algorithms was examined in separate experiments to find the best clustering settings for the ontology clustering problem. Four different similarity assignment methods were also researched and analyzed. Tf-idf vectors and cosine similarity are used to identify similar clusters in the two ontologies, and experiments were made on the cosine similarity threshold to find the most suitable value. Our system successfully builds an automated procedure for generating a reduced search space for ontology alignment. On the one hand, the results show that it reduces the number of comparisons the ontology alignment would otherwise make by a factor of twenty to ninety, while precision goes up as well. On the other hand, it needs only one to two minutes of execution time, while recall and F-score drop only slightly. This trade-off is acceptable for an ontology alignment system that would otherwise take tens of minutes to align the same ontology set. As a result, large-scale ontology alignment becomes more computable and feasible.
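The pruning idea can be sketched end to end: build a term vector for each cluster of concept names, compare clusters across the two ontologies with cosine similarity, and generate candidate concept pairs only from similar cluster pairs. The hand-made clusters, raw term frequencies (rather than full tf-idf), and the 0.3 threshold below are illustrative assumptions, not the thesis's tuned configuration:

```python
# Sketch of search-space reduction for ontology alignment via clustering.
# Clusters are given by hand here; the thesis obtains them with k-means,
# Chameleon, or Birch, and weights vectors with tf-idf.
import math
from collections import Counter

def vector(cluster):
    """Term-frequency vector over all tokens in a cluster's concept names."""
    return Counter(t for name in cluster for t in name.lower().split("_"))

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def candidate_pairs(clusters1, clusters2, threshold=0.3):
    """Concept pairs drawn only from cluster pairs above the threshold."""
    pairs = []
    for c1 in clusters1:
        for c2 in clusters2:
            if cosine(vector(c1), vector(c2)) >= threshold:
                pairs += [(a, b) for a in c1 for b in c2]
    return pairs

clusters_a = [["blood_cell", "red_blood_cell"], ["heart", "cardiac_muscle"]]
clusters_b = [["blood_cell"], ["kidney", "renal_cortex"]]
full = sum(len(c) for c in clusters_a) * sum(len(c) for c in clusters_b)
pruned = candidate_pairs(clusters_a, clusters_b)
print(len(pruned), "of", full, "comparisons kept")
```

Even on this toy input the full cross-product of 12 comparisons shrinks to 2, which is the mechanism behind the twenty- to ninety-fold reduction reported above.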
477

Large-Scale Land Investments and Land-Use Change / Determinants and Impacts on Rural Development

Sipangule, Kacana 26 January 2018
No description available.
478

New Statistical Methods of Single-subject Transcriptome Analysis for Precision Medicine

Li, Qike January 2017
Precision medicine provides targeted treatment for an individual patient based on disease mechanisms, advancing health care. Matched transcriptomes derived from a single subject make it possible to uncover patient-specific dynamic changes associated with disease status. Yet conventional statistical methodologies remain largely unavailable for single-subject transcriptome analysis because of the "single-observation" challenge. We hypothesize that, with statistical learning approaches and large-scale inference, one can learn useful information from single-subject transcriptome data by identifying differentially expressed genes (DEGs) and pathways (DEPs) between two transcriptomes of an individual. This dissertation is an ensemble of my research work in single-subject transcriptome analytics, comprising three projects with different focuses. The first project describes a two-step approach to identify DEPs, employing a parametric Gaussian mixture model followed by Fisher's exact tests. The second project relaxes the parametric assumption and develops a nonparametric algorithm based on k-means, which is more flexible and robust. The third project proposes a novel variance-stabilizing framework that transforms raw gene counts before identifying DEGs; the transformation strategically bypasses the challenge of variance estimation in single-subject transcriptome analysis. In this dissertation, I present the main statistical methods and computational algorithms for all three projects, as well as their real-data applications to personalized treatment.
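The core two-transcriptome comparison can be illustrated with a toy calculation: transform raw counts so that differences are not dominated by high-count variance, then rank genes by the transformed difference. The log2-with-pseudocount transform and the gene counts below are generic stand-ins for illustration, not the framework or data of the dissertation:

```python
# Toy two-sample (baseline vs case) gene ranking for a single subject.
# log2(count + 1) is a generic variance-stabilizing-style transform,
# used here only to illustrate the overall workflow.
import math

def stabilize(count, pseudocount=1.0):
    """Compress the dynamic range of a raw read count."""
    return math.log2(count + pseudocount)

def ranked_changes(baseline, case):
    """Genes ranked by |difference| of transformed counts."""
    diffs = {g: stabilize(case[g]) - stabilize(baseline[g]) for g in baseline}
    return sorted(diffs.items(), key=lambda kv: abs(kv[1]), reverse=True)

normal = {"TP53": 120, "BRCA1": 30, "GAPDH": 5000}
tumor  = {"TP53": 15,  "BRCA1": 240, "GAPDH": 5100}
for gene, d in ranked_changes(normal, tumor):
    print(gene, round(d, 2))
```

On this toy input the eight-fold changes in TP53 and BRCA1 rank far above the stable housekeeping-style GAPDH, even though GAPDH's absolute count difference (100 reads) is the largest; that is precisely what the transformation buys.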
479

New Optimization Methods for Modern Machine Learning

Reddi, Sashank Jakkam 01 July 2017
Modern machine learning systems pose several new statistical, scalability, privacy and ethical challenges. With the advent of massive datasets and increasingly complex tasks, scalability has especially become a critical issue in these systems. In this thesis, we focus on fundamental challenges related to scalability, such as computational and communication efficiency, in modern machine learning applications. The underlying central message of this thesis is that classical statistical thinking leads to highly effective optimization methods for modern big data applications. The first part of the thesis investigates optimization methods for solving large-scale nonconvex Empirical Risk Minimization (ERM) problems. Such problems have surged into prominence, notably through deep learning, and have led to exciting progress. However, our understanding of optimization methods suitable for these problems is still very limited. We develop and analyze a new line of optimization methods for nonconvex ERM problems, based on the principle of variance reduction. We show that our methods exhibit fast convergence to stationary points and improve the state-of-the-art in several nonconvex ERM settings, including nonsmooth and constrained ERM. Using similar principles, we also develop novel optimization methods that provably converge to second-order stationary points. Finally, we show that the key principles behind our methods can be generalized to overcome challenges in other important problems such as Bayesian inference. The second part of the thesis studies two critical aspects of modern distributed machine learning systems — asynchronicity and communication efficiency of optimization methods. We study various asynchronous stochastic algorithms with fast convergence for convex ERM problems and show that these methods achieve near-linear speedups in sparse settings common to machine learning. 
Another key factor governing the overall performance of a distributed system is its communication efficiency. Traditional optimization algorithms used in machine learning are often ill-suited for distributed environments with high communication cost. To address this issue, we discuss two different paradigms for achieving communication efficiency in distributed environments and explore new algorithms with better communication complexity.
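The variance-reduction principle underlying the first part of the thesis can be sketched with SVRG on a one-dimensional least-squares toy problem (the thesis targets nonconvex ERM; this convex example is only for intuition). Each inner step corrects the stochastic gradient with a snapshot gradient, so the update noise vanishes as the iterate approaches the snapshot:

```python
# SVRG sketch: stochastic gradient corrected by a periodically recomputed
# full gradient at a snapshot point, reducing gradient variance.
import random

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # roughly y = 2x

def grad(w, point):
    """Gradient of the per-sample loss (w*x - y)^2 with respect to w."""
    x, y = point
    return 2 * (w * x - y) * x

def svrg(w=0.0, epochs=30, lr=0.01, seed=0):
    rng = random.Random(seed)
    n = len(data)
    for _ in range(epochs):
        w_snap = w
        full_grad = sum(grad(w_snap, p) for p in data) / n  # anchor gradient
        for _ in range(n):
            p = data[rng.randrange(n)]
            # corrected stochastic gradient: unbiased, lower variance
            w -= lr * (grad(w, p) - grad(w_snap, p) + full_grad)
    return w

print(round(svrg(), 3))
```

The iterate converges close to the least-squares solution (about 1.99 for this data); plain SGD with the same fixed step size would keep oscillating, since its gradient noise does not shrink at the optimum.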
480

The impact of agile principles and practices on large-scale software development projects: A multiple-case study of two software development projects at Ericsson

Lagerberg, Lina, Skude, Tor January 2013
Agile software development methods are often advertised as a contrast to the traditional, plan-driven approach to software development. The reported benefits for software quality, coordination, productivity, and other areas are numerous, but the base of empirical evidence for the claimed effects is thin, and more empirical studies of the effects of agile software development methods in different contexts are needed, especially in large-scale industrial settings. The purpose of this thesis was to study the impact of using agile principles and practices in large-scale software development projects at Ericsson, and it was carried out as a multiple-case study of two projects. One project had implemented a limited number of agile software development practices and was largely plan-driven, while the other had fully adapted its organization and product design to agile software development. Propositions about possible effects of agile principles and practices in the two projects were generated through a literature review. Empirical data was then collected from online surveys of project members, internal documents, personal contact with key project members, and a collection of metrics, in order to study the presence of the proposed effects. The study focused on eight areas: internal software documentation, knowledge sharing, project visibility, coordination, pressure and stress, productivity, software quality, and project success rate. Agile principles and practices were found to: lead to a more balanced use of internal software documentation, when supported by sound documentation policies; contribute to knowledge sharing; increase project members' visibility into the status of other teams and the entire project; increase coordination effectiveness and reduce the need for other types of coordination mechanisms; increase productivity; and possibly increase software quality.

Additionally, the study showed that internal software documentation remains important in agile software development projects and cannot be fully replaced with face-to-face communication. Further, it was clear that a partial implementation of agile principles and practices is possible and can still have a positive impact. Finally, the study showed that it is feasible to implement agile principles and practices in large-scale software development. The thesis thereby contributes to understanding the effects of agile software development in different contexts.
