1

Development of asphalt removing tool for a tandem roller / Utveckling av ett verktyg för att ta bort asfalt på tandemvältar

Thorwaldsson, Henrik January 2014 (has links)
This master's thesis was carried out to develop concepts that could solve problems connected to vibrations in tandem rollers. The main problem is that the vibrations make it harder to remove asphalt with the built-in scraper, creating uneven contact between the scraper and the drum. The new concepts should improve the machine's ability to remove asphalt and reduce the amount of maintenance needed. To understand what the new tool needs to do, functional analyses were carried out, and new concepts were generated with the TRIZ method. The different concepts were evaluated with a Pugh matrix and a SWOT analysis. The final part focuses on how the best concept could be improved further. The final concept locks the scraper geometrically so that it moves the same way as the drum, keeping it at a constant distance from the drum.
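The concept screening the abstract mentions can be illustrated with a minimal Pugh matrix, where each concept is scored per criterion against a datum (baseline) design and the weighted scores are summed. The criteria, weights, concept names and scores below are illustrative placeholders, not values from the thesis.

```python
# Minimal Pugh-matrix concept screening (illustrative criteria and scores,
# not the actual values used in the thesis).
criteria = {"asphalt removal": 3, "maintenance": 2, "cost": 1, "robustness": 2}

# Score of each concept against the datum (the existing scraper):
# +1 better, 0 same, -1 worse.
concepts = {
    "spring-loaded scraper": {"asphalt removal": 1, "maintenance": 0, "cost": -1, "robustness": 0},
    "geometrically locked scraper": {"asphalt removal": 1, "maintenance": 1, "cost": -1, "robustness": 1},
}

for name, scores in concepts.items():
    total = sum(criteria[c] * s for c, s in scores.items())
    print(f"{name}: weighted score {total:+d}")
```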
2

An advisory system for scraper selection

Mayfield, John Charles 29 August 2005 (has links)
Scrapers are useful construction equipment when hauling distances range between 500 and 3,000 feet. When preparing an estimate for an earthmoving project utilizing scrapers, the capacity of the scraper and the cycle time for the given project conditions must be calculated. Since travel time varies widely based on the conditions of the haul road and the performance of the equipment, determining the most economical selection (size and model) and the correct number of scrapers and pushers is a rather tedious process, and the calculation of travel time between the cut and fill zones involves repetitive calculations. A spreadsheet-based interactive advisory system was created to facilitate these calculations and generate a list of recommended equipment. The system contains a scraper database, performance charts, soil properties, and a user interface to solicit project-specific data such as haul-road surface conditions and characteristics. Data such as efficiency (minutes worked per hour) and hourly rates for operators and other workers can also be specified in the user interface. Once the user enters the quantity to be moved, the application calculates the production rate, the time required for the job, and the estimated unit cost for each scraper in the database. The system then produces a list of all scrapers, sorted by shortest time or lowest unit price.
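The estimate the system automates follows the standard earthmoving arithmetic: cycle time is the sum of fixed time (load, maneuver, dump) and travel time, production is loads per working hour times capacity, and unit cost is the hourly rate divided by production. A minimal sketch with placeholder machine parameters, not values from the thesis's database:

```python
# Rough scraper production and unit-cost estimate (placeholder values, not
# the thesis's database). Cycle time = fixed time + haul + return travel time.
heaped_capacity_lcy = 31.0       # loose cubic yards per load
fixed_time_min = 1.5             # load + maneuver + dump
haul_time_min = 2.0              # depends on haul-road condition and grade
return_time_min = 1.4
efficiency_min_per_hr = 50.0     # working minutes per 60-minute hour
owning_operating_per_hr = 185.0  # scraper + operator, $/hr

cycle_time_min = fixed_time_min + haul_time_min + return_time_min
trips_per_hour = efficiency_min_per_hr / cycle_time_min
production_lcy_per_hr = trips_per_hour * heaped_capacity_lcy
unit_cost = owning_operating_per_hr / production_lcy_per_hr

print(f"cycle time: {cycle_time_min:.1f} min, production: {production_lcy_per_hr:.0f} LCY/hr")
print(f"unit cost: ${unit_cost:.2f}/LCY")
```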
3

Automatizované vyhledávání a uchovávání recenzí o produktech

Voráč, Tomáš January 2019 (has links)
The diploma thesis deals with the problem of automated searching for reviews on web pages and saving the reviews found. The work describes in detail the options for storing unstructured data and the subsequent selection of the most suitable storage. The main part deals with the analysis of HTML structure, so that the required information can be found on a website. The work also deals with ways to measure the similarity of text strings in order to determine which product a found review belongs to. The Python programming language was used for the implementation.
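The string-similarity step can be sketched with Python's standard library; the abstract does not name the measure used, so the `SequenceMatcher` ratio below is one plausible choice, and the product names are made up for illustration:

```python
from difflib import SequenceMatcher

# Match a scraped review to the most similar known product name.
# Product names and the review title are illustrative only.
products = ["Canon EOS 250D", "Nikon D3500", "Sony Alpha A6000"]

def best_match(review_title: str, candidates: list[str]) -> tuple[str, float]:
    scored = [(p, SequenceMatcher(None, review_title.lower(), p.lower()).ratio())
              for p in candidates]
    return max(scored, key=lambda pair: pair[1])

product, score = best_match("Review: Canon 250D after six months", products)
print(product, round(score, 2))  # picks "Canon EOS 250D" as the closest name
```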
4

A Web Scraper For Forums : Navigation and text extraction methods

Palma, Michael, Zhou, Shidi January 2017 (has links)
Web forums are a popular way of exchanging information and discussing various topics. These websites usually have a special structure, divided into boards, threads and posts. Although this structure might be consistent across forums, the layout of each forum is different, and the way a web forum presents user posts is very different from how a news website presents a single piece of information. All of this makes navigation and text extraction a hard task for web scrapers. The focus of this thesis is the development of a web scraper specialized in forums. Three different methods for text extraction are implemented and tested before choosing the most appropriate method for the task: Word Count, the Text-Detection Framework and the Text-to-Tag Ratio. The handling of duplicate links is also considered and solved by implementing a multi-layer Bloom filter. The thesis is conducted applying a qualitative methodology. The results indicate that the Text-to-Tag Ratio has the best overall performance and gives the most desirable results on web forums; thus, it was the method selected for the final version of the web scraper.
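A minimal line-based version of the Text-to-Tag Ratio idea, a simplified sketch of the general technique rather than the thesis's implementation:

```python
import re

def text_to_tag_ratio(html_line: str) -> float:
    """Ratio of visible characters to HTML tags on one line; content-heavy
    lines (forum posts) score high, navigation and markup lines score low."""
    tags = re.findall(r"<[^>]*>", html_line)
    text = re.sub(r"<[^>]*>", "", html_line)
    return len(text.strip()) / max(len(tags), 1)

html = [
    '<div class="nav"><a href="/">Home</a><a href="/forum">Forum</a></div>',
    '<p>This is the actual post text, long enough to dominate the markup around it.</p>',
]
for line in html:
    print(round(text_to_tag_ratio(line), 1), line[:40])
```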
5

Developing a Python based web scraper : A study on the development of a web scraper for TimeEdit

Andersson, Pontus January 2021 (has links)
The concept of scraping the web is not new; however, with modern programming languages it is possible to build web scrapers that can collect unstructured data and save it in a structured way. TimeEdit, a scheduling platform used by Mid Sweden University, has no feasible way to count how many hours have been scheduled in any given week for a specific course, student, or professor. The goal of this thesis is to build a Python-based web scraper that collects data from TimeEdit and saves it in a structured manner. Users can then upload this text file to a dynamic website, where the data is extracted from the file and saved into a predetermined database, unique to that user. The user can then have this data presented in a fast, efficient, and user-friendly way. The platform is developed and evaluated, with the resulting platform being a good and fast way to scan a TimeEdit schedule and evaluate the extracted data. With the platform built, future work is recommended to make it a finished product ready for live use by all types of users.
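The counting task that motivates the scraper can be sketched as below, assuming the schedule has already been saved as a CSV text file; the column layout is an assumption for illustration, not TimeEdit's actual export format:

```python
import csv
from collections import defaultdict
from datetime import datetime

# Sum scheduled hours per course from an exported schedule. The CSV layout
# (start, end, course columns) is an assumed format, not TimeEdit's own.
def hours_per_course(path: str) -> dict[str, float]:
    totals: dict[str, float] = defaultdict(float)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            start = datetime.fromisoformat(row["start"])  # e.g. 2021-03-01T10:15
            end = datetime.fromisoformat(row["end"])
            totals[row["course"]] += (end - start).total_seconds() / 3600
    return dict(totals)

# Hypothetical file name:
# print(hours_per_course("timeedit_schedule.csv"))
```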
6

Automating the extraction of Financial data

Rollino, Nicolas, Ali, Rakin January 2022 (has links)
It is hard for retail investors and data-providing companies to obtain financial data on European companies. Extracting financial data from European companies is most likely done manually, which is a time-consuming process; this would explain why European companies' data is supplied more slowly than American companies'. This thesis examines whether the process of extracting financial data from European companies can be automated, by creating two proof-of-concept systems. One focuses on collecting financial reports of European companies using a web scraper that scrapes the reports directly from the source. The other extracts financial data from the reports using Amazon Web Services (AWS), specifically the text-extraction tool Textract. The system that collects financial reports could not be automated and did not meet the expectations set by the company that commissioned the thesis. The system that extracts financial data was promising, as all data points of interest could be extracted; however, since it relies on a system that supplies it with reports, it cannot yet be deployed. The work shows that automating the extraction of financial data from European companies is not (yet) possible: extracting the data from reports is feasible, but collecting the reports is the bottleneck. Manually collecting the financial reports would have been better than using a web scraper in this thesis; this bottleneck could be addressed in future projects.
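The extraction side can be sketched with the boto3 Textract client; the region and file name are placeholders, and multi-page PDF reports would use the asynchronous `start_document_text_detection` API instead of the synchronous call shown here:

```python
import boto3

# Minimal synchronous Textract call on a single page image (placeholder file
# name); multi-page PDFs require the asynchronous S3-based API instead.
textract = boto3.client("textract", region_name="eu-west-1")

with open("annual_report_page.png", "rb") as f:
    response = textract.detect_document_text(Document={"Bytes": f.read()})

# Each detected LINE block carries the recognized text and a confidence score.
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(f'{block["Confidence"]:.1f}%  {block["Text"]}')
```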
7

A Morphological Analysis of End Scrapers at Nobles Pond (33ST357), A Gainey Phase Paleoindian Site in Northeast Ohio

Comstock, Aaron R. 18 July 2011 (has links)
No description available.
8

Analyses tracéologiques pour l'interprétation de la fonction et de l'utilisation d'industries lithiques en quartz du Paléolithique coréen / Use-wear analysis for the interpretation of the function and utilization of quartz stone tools of the Paleolithic in Korea

Kim, Kyung-Jin 19 June 2017 (has links)
It is necessary to examine quartz tools to understand the Palaeolithic culture in Korea. It is difficult to identify temporal changes in the raw materials, assemblages and production techniques of stone tools found in Korean Palaeolithic sites ranging from the early to the late Palaeolithic, because most stone tools uncovered from these sites are made of quartz. This research therefore raises two questions: can use-wear analysis reveal functional changes in stone tools in accordance with production dates and environmental changes, and can it detect the particular characteristics of the tools found at each site? The use-wear analyses of each cultural layer show no clear temporal differences in raw materials, assemblages or uses of lithic tools; the sites where the excavated tools were mainly made of quartz show continuity rather than marked changes between cultural layers. To relate tool use to site function, end-scrapers excavated from seven sites were analysed and compared. The results suggest that most end-scrapers were used for processing hides, although wear patterns formed by working wood and antler could be observed on large end-scrapers. It can therefore be inferred that the use of end-scrapers depended on their size; the small end-scrapers produced in the late Palaeolithic seem closely related to hide-working. To seek clearer answers, further analyses will be needed on Palaeolithic sites from the transitional era in different regions and on sites yielding tools made of other raw materials.
9

How to Build a Web Scraper for Social Media

Lloyd, Oskar, Nilsson, Christoffer January 2019 (has links)
In recent years, the act of scraping websites for information has become increasingly relevant. However, along with this increase in interest, the internet has also grown substantially and advances and improvements to websites over the years have in fact made it more difficult to scrape. One key reason for this is that scrapers simply account for a significant portion of the traffic to many websites, and so developers often implement anti-scraping measures along with the Robots Exclusion Protocol (robots.txt) to try to stymie this traffic. The popular use of dynamically loaded content – content which loads after user interaction – poses another problem for scrapers. In this paper, we have researched what kinds of issues commonly occur when scraping and crawling websites – more specifically when scraping social media – and how to solve them. In order to understand these issues better and to test solutions, a literature review was performed and design and creation methods were used to develop a prototype scraper using the frameworks Scrapy and Selenium. We found that automating interaction with dynamic elements worked best to solve the problem of dynamically loaded content. We also theorize that having an artificial random delay when scraping and randomizing intervals between each visit to a website would counteract some of the anti-scraping measures. Another, smaller aspect of our research was the legality and ethicality of scraping. Further thoughts and comments on potential solutions to other issues have also been included.
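The two mitigations the authors describe (automating interaction with dynamic elements, and an artificial random delay between actions) can be sketched with Selenium as follows; the URL and CSS selectors are placeholders, not a real social media site:

```python
import random
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

# Placeholder URL and selectors; a real forum or social feed would differ.
# Requires a local geckodriver for Firefox.
driver = webdriver.Firefox()
driver.get("https://example.com/feed")

for _ in range(5):
    # Click the "load more" element so dynamically loaded posts enter the DOM.
    buttons = driver.find_elements(By.CSS_SELECTOR, "button.load-more")
    if not buttons:
        break
    buttons[0].click()
    # Artificial randomized delay between interactions to counteract
    # rate-based anti-scraping measures.
    time.sleep(random.uniform(2.0, 6.0))

posts = [p.text for p in driver.find_elements(By.CSS_SELECTOR, "div.post")]
driver.quit()
print(len(posts), "posts scraped")
```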
