51

A GIS-Based Data Model and Tools for Analysis and Visualization of Levee Breaching Using the GSSHA Model

Tran, Hoang Luu 17 March 2011 (has links) (PDF)
Levee breaching is the most frequent and dangerous form of levee failure. A levee breach occurs when floodwater breaks through part of the levee, creating an opening for water to flood the protected area. According to the National Committee on Levee Safety (NCLS), a reasonable upper limit for damage resulting from levee breaching between 1998 and 2007 is around $10 billion per year. This figure excludes hurricanes Katrina and Rita in 2005, which resulted in economic damages estimated at more than $200 billion and a loss of more than 1,800 lives. In response to these catastrophic failures, the U.S. Army Corps of Engineers (USACE) began developing the National Levee Database (NLD) in May 2006. The NLD has a critical role in evaluating the safety of the national levee system and contains information on the attributes of that system. The Levee Analyst Data Model was developed by Dr. Norm Jones, Jeff Handy, and Thomas Griffiths to supplement the NLD. Levee Analyst is a data model and suite of tools for managing levee information in ArcGIS and exporting it to Google Earth for enhanced visualization. The current Levee Analyst has a concise and expandable structure for managing, archiving, and analyzing large amounts of levee seepage and slope stability data (Thomas 2009). The new set of tools developed in this research extends the Levee Analyst Data Model to analyze and manage levee breach simulations and store them in the NLD geodatabase. The capabilities of the new geoprocessing tools, and their compatibility with the NLD, are demonstrated in a case study. The feasibility of using the GSSHA model to simulate flooding is also demonstrated in this research.
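As a rough illustration of the export step described above (levee data managed in ArcGIS and pushed out to Google Earth), a minimal arcpy sketch follows. The geodatabase path and feature class name are hypothetical, and this is a generic conversion, not the Levee Analyst toolset itself:

```python
# Hedged sketch: export a levee feature class from a geodatabase to KMZ for
# viewing in Google Earth. Requires an ArcGIS installation (arcpy is not
# pip-installable); paths and layer names are illustrative, not from the thesis.
import arcpy

arcpy.env.workspace = r"C:\data\levees_sample.gdb"  # hypothetical geodatabase

# Create an in-memory layer from the levee feature class, then convert it
# to a KMZ file that Google Earth can open directly.
arcpy.management.MakeFeatureLayer("LeveeCenterlines", "levee_lyr")
arcpy.conversion.LayerToKML("levee_lyr", r"C:\out\levees.kmz")
print("Exported levees.kmz for visualization in Google Earth")
```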
52

GIS-based crisis communication : A platform for authorities to communicate with the public during wildfire

Althén Bergman, Felix, Östblom, Evelina January 2019 (has links)
Today, people are used to having technology as a constant aid, which sets the expectation that information should always be available. This, together with ongoing climate change that has led to more natural disasters, has created a need to change how geographical data is collected, compiled, and visualized for crisis communication. This study explores how authorities currently communicate with the public during a crisis and how this can be done in an easier and more comprehensible way with the help of Geographical Information Systems (GIS). The goal is to present a new way of collecting, compiling, and visualizing geographical data so that an authority can communicate with the public during a crisis. This was done through a case study focused on wildfires. Most of the work therefore consisted of creating a prototype, CMAP (Crisis Management and Planning), that visualizes fire-related data. The groundwork for the prototype consisted of determining what data exists and is necessary for the information to be complete and easily understood, together with how that data is best implemented. Existing data was retrieved online or via scheduled API requests. Event-related data, which is often created in connection with the event itself, was given a common structure and implemented automatically into the prototype using Google Fusion Tables. In the prototype, data was visualized in two interactive map-based sections. These sections focused on providing the user either with the information that might be needed if they fear they are within an affected location, or with general preparatory information for different counties. Finally, a non-map-based section was created that allowed the public to help the authorities and each other via crowdsourced data, collected through a digital form and then visualized directly in the prototype's map-based sections. The results showed, among other things, that automatic data flows are a good alternative for avoiding manual data handling and thus enable more frequent updates. They also showed the importance of a common structure for which data is to be included and collected when building a communication platform. Finally, visualizing dynamic polygon data in an interactive environment is a development in crisis communication that can benefit the public's understanding of the situation. This thesis is limited to the functionality and layout provided by the Google platform, including Google Earth Engine, Google Forms, and Google Fusion Tables.
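The data pipeline described here (existing data fetched online or via scheduled API requests, event data normalized into a common structure before display) can be sketched in a few lines of Python. The endpoint URL and field names below are invented placeholders; the prototype itself relied on Google Fusion Tables, a service Google has since retired:

```python
# Hypothetical sketch of a scheduled API fetch that normalizes fire-event
# records into a common structure for a map layer. The endpoint and field
# names are placeholders, not those used in the CMAP prototype.
import time
import requests

FIRE_API = "https://example.org/api/fires"   # hypothetical endpoint

def fetch_and_normalize():
    records = requests.get(FIRE_API, timeout=10).json()
    # Common structure: every event carries the same minimal fields,
    # so the map layer never has to special-case a data source.
    return [
        {
            "id": r["id"],
            "lat": float(r["latitude"]),
            "lon": float(r["longitude"]),
            "status": r.get("status", "unknown"),
            "updated": r.get("updated"),
        }
        for r in records
    ]

if __name__ == "__main__":
    while True:                  # crude scheduler; cron would do in practice
        events = fetch_and_normalize()
        print(f"{len(events)} fire events refreshed")
        time.sleep(15 * 60)      # refresh every 15 minutes
```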
53

Feature Extraction and Feature Selection for Object-based Land Cover Classification : Optimisation of Support Vector Machines in a Cloud Computing Environment

Stromann, Oliver January 2018 (has links)
Mapping the Earth's surface and its rapid changes with remotely sensed data is a crucial tool for understanding the impact of an increasingly urban world population on the environment. However, the impressive amount of freely available Copernicus data is only marginally exploited in common classifications. One reason is that measuring the properties of training samples, the so-called 'features', is costly and tedious. Furthermore, handling large feature sets is not easy in most image classification software. This often leads to the manual choice of a few, allegedly promising features. In this Master's thesis degree project, I use the computational power of Google Earth Engine and Google Cloud Platform to generate an oversized feature set in which I explore feature importance and analyse the influence of dimensionality reduction methods. I use Support Vector Machines (SVMs) for object-based classification of satellite images - a commonly used method. A large feature set is evaluated to find the features that are most relevant for discriminating the classes and thereby contribute most to high classification accuracy. In doing so, one can bypass the sensitive, knowledge-based, but sometimes arbitrary selection of input features. Two kinds of dimensionality reduction methods are investigated: the feature extraction methods Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA), which transform the original feature space into a projected space of lower dimensionality, and the filter-based feature selection methods chi-squared test, mutual information, and Fisher criterion, which rank and filter the features according to a chosen statistic. I compare these methods against the default SVM in terms of classification accuracy and computational performance. Classification accuracy is measured in overall accuracy, prediction stability, inter-rater agreement, and sensitivity to training set sizes. Computational performance is measured in the decrease in training and prediction times and the compression factor of the input data. Based on this analysis, I conclude on the best performing classifier with the most effective feature set. In a case study of mapping urban land cover in Stockholm, Sweden, based on multitemporal stacks of Sentinel-1 and Sentinel-2 imagery, I demonstrate the integration of Google Earth Engine and Google Cloud Platform for an optimised supervised land cover classification. I use dimensionality reduction methods provided in the open source scikit-learn library and show how they can improve classification accuracy and reduce the data load. At the same time, this project gives an indication of how the exploitation of big earth observation data can be approached in a cloud computing environment. The preliminary results highlighted the effectiveness and necessity of dimensionality reduction methods, but also strengthened the need for inter-comparable object-based land cover classification benchmarks to fully assess the quality of the derived products. To facilitate this need and encourage further research, I plan to publish the datasets (i.e. imagery, training and test data) and provide access to the developed Google Earth Engine and Python scripts as Free and Open Source Software (FOSS).
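As a sketch of the comparison this abstract sets up, the scikit-learn snippet below pits a baseline SVM against pipelines that first reduce dimensionality by feature extraction (LDA) or by filter-based selection (chi-squared test, mutual information). The synthetic data, the number of classes, and the k=30 cutoff are placeholders standing in for the thesis's Sentinel-1/2 object features, not its actual setup:

```python
# Sketch of the dimensionality-reduction comparison described above, using
# scikit-learn. Synthetic features stand in for the Sentinel-1/2 object set.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# 600 objects, 200 features, 5 land cover classes (all illustrative numbers)
X, y = make_classification(n_samples=600, n_features=200, n_informative=25,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

candidates = {
    "SVM (all features)": make_pipeline(MinMaxScaler(), SVC()),
    # Feature extraction: project onto at most n_classes - 1 discriminants.
    "LDA -> SVM": make_pipeline(MinMaxScaler(),
                                LinearDiscriminantAnalysis(n_components=4),
                                SVC()),
    # Filter-based selection; chi2 needs non-negative inputs, hence the scaler.
    "chi2 top-30 -> SVM": make_pipeline(MinMaxScaler(),
                                        SelectKBest(chi2, k=30), SVC()),
    "MI top-30 -> SVM": make_pipeline(MinMaxScaler(),
                                      SelectKBest(mutual_info_classif, k=30),
                                      SVC()),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:22s} accuracy = {scores.mean():.3f}")
```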
54

Monitoring of cover cropping practices and their impacts on agricultural productivity and water quality in the Maumee River watershed using remote sensing

KC, Kushal January 2021 (has links)
No description available.
55

A remote sensing driven geospatial approach to regional crop growth and yield modeling

Shammi, Sadia Alam 06 August 2021 (has links)
Agriculture and food security are interlinked. New technologies and instruments are making agricultural systems easier to operate and are increasing food production. Remote sensing is widely used as a non-destructive method for crop growth monitoring, climate analysis, and crop yield forecasting. The objectives of this study are to (1) monitor crop growth remotely, (2) identify climate impacts on crop yield, and (3) forecast crop yield. This study proposes methods to improve crop growth monitoring and yield prediction using remote sensing technology. We developed crop vegetative growth metrics (VGM) from MODIS (Moderate Resolution Imaging Spectroradiometer) 250 m NDVI (Normalized Difference Vegetation Index) and EVI (Enhanced Vegetation Index) data. We developed 19 NDVI- and EVI-based VGM metrics for the soybean crop from a 2000-2018 time series, though the methods are applicable to other crops as well. We found that the metrics VGMmax, VGM70, VGM85, and VGM98T predict about 95% of crop yield; however, these metrics are independent of climatic events. We therefore modelled the climatic impacts on the soybean crop using 1980-2019 time series data collected from NOAA's National Climatic Data Center (NCDC), estimating the impacts of increases and decreases in temperature (maximum, mean, and minimum) and average precipitation on crop yields, which will be helpful for monitoring climate change impacts on crop production. Lastly, we built statistical crop yield forecasting models across different climatic regions of the USA using Google Earth Engine. We used remotely sensed MODIS Terra surface reflectance 8-day global 250 m data to calculate VGM metrics (e.g. VGM70, VGM85, VGM98T, VGM120, VGMmean, and VGMmax), MODIS Terra Land Surface Temperature and Emissivity 8-day data for average day-time and night-time temperature, and CHIRPS (Climate Hazards Group InfraRed Precipitation with Station data) data for precipitation, over a 2000-2019 time series. Our prediction models showed a Normalized Mean Prediction Error (NMPE) within a range of -0.002 to 0.007. These models will be helpful for obtaining an overall estimate of crop production and will aid national agricultural strategic planning. Overall, this study will benefit farmers, researchers, and the management systems of the U.S. Department of Agriculture (USDA).
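A minimal Google Earth Engine (Python API) sketch of a season-level vegetation metric in the spirit of VGMmax — the per-pixel growing-season maximum of MODIS 250 m NDVI — follows. For brevity it uses the pre-composited MOD13Q1 NDVI product rather than deriving NDVI from the 8-day surface reflectance data the study uses, and the region and dates are illustrative:

```python
# Sketch of a season-maximum NDVI metric (a VGMmax analog) from MODIS 250 m
# data in Google Earth Engine. Region, year, and collection ID are
# illustrative; the thesis defines 19 NDVI/EVI metrics, not just this one.
import ee

ee.Initialize()  # assumes prior `earthengine authenticate`

region = ee.Geometry.Rectangle([-91.0, 33.0, -90.0, 34.0])  # example AOI

ndvi = (ee.ImageCollection("MODIS/061/MOD13Q1")      # 16-day 250 m VI product
          .filterDate("2018-04-01", "2018-10-31")    # one growing season
          .select("NDVI"))

vgm_max = ndvi.max().multiply(0.0001)  # apply MODIS NDVI scale factor

stats = vgm_max.reduceRegion(reducer=ee.Reducer.mean(),
                             geometry=region, scale=250, maxPixels=1e9)
print("Mean season-max NDVI over AOI:", stats.getInfo())
```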
56

Use of Google Earth as a platform for delimiting Permanent Preservation Areas (APPs): a case study in the municipality of São Leopoldo

Oliveira, Marcelo Zagonel de 26 March 2009 (has links)
Our society is currently experiencing a paradigm shift in its pursuit of sustainability. The development of new technologies associated with geoprocessing has made it possible to measure environmental problems much more precisely. Google Earth provides free satellite imagery to anyone with internet access; for many places these images are high resolution and suitable for many urban and environmental planning activities. The main objective of this work was to use the high-resolution images made freely available by Google Earth, together with a Geographic Information System (GIS), to analyse their viability for defining the Permanent Preservation Areas (APPs) of the municipality of São Leopoldo/RS. As secondary objectives, the work presents a method for assembling mosaics from scenes captured in Google Earth, a verification of the mosaic against the Cartographic Accuracy Standard (PEC), and a proposal for a low-cost GIS to support the environmental management of small and medium-sized municipalities. Based on statistical tests applied to assess the quality of the georeferenced image, and according to the classification of Decree-Law 89817 (the Cartographic Accuracy Standard), it was concluded that the Google Earth image assembled from scenes captured at an altitude of 5,900 meters can be classified as Class B at a scale of 1:15,000. This cartographic base served as the reference for generating the APPs - around springs, along watercourses, and over wetlands and native forests - which occupy 9.90%, 11.11%, 13%, and 17% of the territory, respectively. The research showed that high-resolution Google Earth satellite images, associated with a network of GPS points, can be used efficiently for more precise localization and quantification of APPs. The products generated by this study, combined with the municipality's cadastral map, become important, low-cost tools for the integrated planning of the various activities carried out by the municipal departments.
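To put the "Class B at 1:15,000" result in perspective: Brazil's Decreto nº 89.817/1984 expresses the planimetric Cartographic Accuracy Standard (PEC) as a tolerance in millimeters at map scale — 0.5 mm for Class A, 0.8 mm for Class B, and 1.0 mm for Class C, as the thresholds are usually quoted (worth verifying against the decree itself). A worked conversion at the reported scale:

```latex
% Ground tolerance implied by PEC Class B at 1:15,000
% (0.8 mm threshold recalled from Decreto 89.817/1984; verify before citing)
\[
\mathrm{PEC}_{\text{ground}}
  = \mathrm{PEC}_{\text{map}} \times \text{scale denominator}
  = 0.8\,\mathrm{mm} \times 15\,000
  = 12\,\mathrm{m}
\]
```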
57

The student's experience of multimodal assignments : play, learning, and visual thinking

Nahas, Lauren Mitchell 30 January 2013 (has links)
Much of current pedagogical discussion of the use of multimodal assignments in the writing classroom argues that one benefit of such assignments is that they foster student engagement, innovation, and creativity while simultaneously teaching writing and argumentation concepts. Although such discussions rarely use the term "play," play theorists consider engagement, innovation, creativity, and learning to be central characteristics and outcomes of play. Thus, what many scholars view as a major outcome of multimodal assignments might most accurately be described as playful learning. In order to investigate the validity of claims that playful learning is a product of multimodal assignments, this dissertation reports on the results of a comparative case study of four different classrooms that used multimodal assignments. The objective of the study was to better understand the students' experience of these assignments because the students' perspective is only represented anecdotally in the literature. The study's research questions asked: Do students find these assignments to be playful, creative, or engaging experiences? Do they view these assignments as related to and supportive of the more traditional goals of the course? And what role does the visual nature of these technologies have in the student's experience of using them or in their pedagogical effectiveness? Each case was composed of a different writing course, a different assignment, and a different multimodal computer technology. The results of the study show that students generally did find these assignments both enjoyable and useful in terms of the learning goals of the course. Many students even went so far as to describe them as fun, indicating that for some students these were playful experiences in the traditional sense. However, comparison of the results of each case illustrates that the simple injection of a multimodal assignment into the classroom will not necessarily create a playful learning experience for students. The students' experience is a complex phenomenon that is impacted by the structure of the assignment, whether or not they are provided a space for exploration and experimentation, their attitude towards the technology, and the characteristics of the technology.
58

Impacts and results of human intervention on the morphogenetic processes in the lower course of the Alfeios River

Αλεβίζος, Γιώργος 08 July 2011 (has links)
This thesis was carried out in the Division of General Marine Geology and Geodynamics of the Department of Geology, University of Patras. It studies the lower course of the Alfeios River and its morphological evolution, in combination with human interventions. The purpose of the work is to present, analyse, comment on, and evaluate the changes that the lower course of the river has undergone. The area is of considerable interest because of its particular and complex morphoclimatic conditions, the engineering works in the region, and, more generally, the effects that result mainly from ever-increasing human intervention.
59

Proposals for using geographic information systems in geography lessons at elementary and secondary schools

NERAD, Jiří January 2018 (has links)
This diploma thesis deals with the practical integration of Geographic Information Systems (GIS) into geography classes at elementary and secondary schools. The suggested ways of using GIS in the educational process are preceded by the theoretical background of the thesis, which addresses the wider context of the position of geoinformatics and the level of geoinformatic literacy in the Czech Republic, including subchapters on the structure of geoinformatic literacy. The theoretical background continues with the anchoring of GIS in national curriculum documents and ends with a chapter on the interdisciplinarity of GIS and its potential for use in teaching. The methodological part describes the structure of the questionnaire, the method of selecting the schools that were approached, and the course of the questionnaire survey on the rate of GIS use in geography classes at elementary and secondary schools; it then describes the methodology for selecting the freeware GIS programs used in the practical part of the thesis. The practical part is divided into three main subchapters reflecting the aims of the thesis. The first deals with the results of the questionnaire survey; the second with the evaluation of available GIS programs and their potential for use in education. The last subchapter is the main part of the thesis and presents suggestions for using GIS in geography lessons at elementary and secondary schools. It includes three learning activities: the first is dedicated to creating one's own map, the second to geographic location; both were implemented with the selected GIS in geography classes at a primary school. The last activity presents suggestions on how to use GIS to learn about current topics and issues.
60

Teaching geography: the use of computational resources (Google Earth) in secondary education

Bonini, André Marciel [UNESP] 30 April 2009 (has links) (PDF)
Today's students live with the widespread use of the Internet and the prospect of an ever more massive presence of personal computers in the home. As a result, the Internet, the personal computer, and educational software, combined, open up inexhaustible learning possibilities for thousands of students. The general objective of this work is to develop methodological approaches for teaching Geography using computational resources, as well as to help secondary-level students learn geographic concepts, aiming at a higher-quality education. Specific objectives are to develop skills in using computer systems so as to bring technology into students' daily lives for the purpose of study, and to evaluate the use of new technologies in education as a didactic resource, taking the Google Earth system as an example. The development of this work was based on a comparative analysis of an experimental and inductive character, since the results can be generalized. Teaching was grounded in Constructivism, seeking student-centered, meaningful learning, in some cases based on everyday local, regional, or global problems. Assessment followed summative and formative models, with research data collected both qualitatively (participant observation, with reports containing the teacher's personal evaluation of the evolution of the students' teaching-learning process) and quantitatively. Analysis of the results obtained indicates that the use of new technologies can contribute to the teaching-learning process, with caveats addressed in the specific case of this work.
