  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Počítání unikátních aut ve snímcích / Unique Car Counting

Uhrín, Peter January 2021
Current systems for counting cars in parking lots usually rely on specialized equipment, such as barriers at the entrance. Such equipment is not suitable for free or residential parking areas, yet even there it helps to keep track of occupancy and other data. The system designed in this thesis uses the YOLOv4 model for visual detection of cars in images. It then calculates an embedding vector for each vehicle, which describes the car and is used to compare whether the car at a given parking spot has changed over time. This information is stored in a database and used to calculate statistics such as total car count, average occupancy, or average stay time. These values can be retrieved via a REST API or viewed in a web application.
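The per-spot comparison described above can be sketched as follows. This is a minimal illustration only: the cosine-similarity measure and the 0.9 threshold are assumptions for the sketch, not details taken from the thesis.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_car(prev_embedding, new_embedding, threshold=0.9):
    """Treat the spot's occupant as unchanged when the embeddings
    are close enough; otherwise count a new unique car."""
    return cosine_similarity(prev_embedding, new_embedding) >= threshold
```

A new unique car is counted only when `same_car` returns `False` for the spot's previous and current embeddings.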
52

Aplikace pro přípravu zkoušek / Application for Exam Preparation

Líbal, Tomáš January 2021
This thesis deals with the preparation of final exams at the Faculty of Information Technology of Brno University of Technology. It describes the design and implementation of a web application that allows teachers to create and manage room layouts and the dates of individual exams. An important part of the application is the automatic placement of students in rooms and the generation of individual printable exam assignments based on a given template and the placement method. Students get a clear overview of individual exam dates and their details. The result is a functional and usable web application written in Java using the Spring and Angular frameworks.
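The automatic placement step can be illustrated with a simple greedy scheme. This is a sketch under assumed inputs; the thesis's actual placement methods and data model are not specified here.

```python
def place_students(students, rooms):
    """Greedy placement: fill each room's seats in order.
    `rooms` maps room name -> number of seats; returns a mapping
    student -> (room, seat index), or raises if capacity is exceeded."""
    seats = [(room, seat) for room, cap in rooms.items() for seat in range(cap)]
    if len(students) > len(seats):
        raise ValueError("not enough seats for all students")
    return {student: seat for student, seat in zip(students, seats)}
```

A real placement method would also handle constraints such as spacing students apart or balancing rooms, but the room-by-room fill conveys the basic idea.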
53

Händelsekonstruktion genom säkrande och analys av data från ett hemautomationssystem / Event Reconstruction by Securing and Analyzing Data from a Home Automation System

Baghyari, Roza, Nykvist, Carolina January 2019
The purpose of this bachelor thesis was to extract timestamps, from a forensic perspective, from a home automation system built around the Homey controller by Athom. First, a fictional course of events was constructed: a burglary in an apartment equipped with home automation. The system consisted of several peripheral units using different wireless network protocols, all of which were triggered during the break-in. Different methods were then tested to extract timestamps for each unit: the REST API, UART, and chip-off on the flash memory; JTAG was not attempted due to lack of time. The REST API gave the best results, allowing extraction of all timestamps as well as information about every unit. The flash memory also contained every timestamp, but linking a timestamp to a specific unit was not possible without information from the REST API. Although the REST API performed best, it was also the method with the most prerequisites, such as login credentials or a rooted smartphone. Using the extracted timestamps, the course of events of the break-in was then reconstructed.
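Once timestamps are extracted (for example from the controller's REST API), reconstructing the course of events amounts to ordering them chronologically. A minimal sketch, with hypothetical event data:

```python
from datetime import datetime

def reconstruct_timeline(events):
    """Order extracted (unit, ISO-8601 timestamp) pairs chronologically,
    as one would when rebuilding a course of events from the
    controller's extracted data."""
    return sorted(events, key=lambda e: datetime.fromisoformat(e[1]))
```

The units and times below are invented for illustration; in the thesis each timestamp came from a triggered peripheral device.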
54

Nástroj pro podporu managementu rizik / Risk Management Support Tool

Škutová, Sára January 2020
This master's thesis deals with risk management in projects. The goal was to study the knowledge areas of project management, especially the planning, identification, and qualitative analysis of risks. Based on this, a tool was designed to visualize the risks of a project while supporting their identification and analysis. The tool was implemented as a combination of a Java Spring Boot server and a React client. The thesis concludes with the results of testing and possible future extensions of the application.
55

Fuzz testování REST API / Fuzz Testing of REST API

Segedy, Patrik January 2020
This thesis deals with fuzz testing of REST APIs. After presenting an overview of fuzzing techniques and reviewing current tools and research focused on REST API fuzzing, we designed and implemented our own REST API fuzzer. The core of our solution is the inference of dependencies from an OpenAPI description of the REST API, which enables stateful testing of the application. Our fuzzer minimizes the number of consecutive 404 responses from the application and tests it in greater depth. The problem of exploring the application's reachable states is addressed by ordering the dependencies so as to maximize the probability of obtaining the input data needed for required parameters, combined with deciding which required parameters may also use randomly generated values. The implementation extends the Schemathesis project, which generates inputs using the Hypothesis library. The implemented fuzzer was used to test the Red Hat Insights application, where it found 32 bugs, one of which can only be reproduced using stateful testing.
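The dependency-ordering idea at the core of such a fuzzer can be sketched as a topological sort over producer/consumer relations between operations. The operation names and the producer/consumer maps below are assumptions for illustration, not Schemathesis internals.

```python
from graphlib import TopologicalSorter

def order_operations(produces, consumes):
    """Order REST operations so that, where possible, an operation
    producing a value (e.g. an id returned by POST /users) runs
    before operations whose required parameters consume that value.
    `produces` and `consumes` map operation -> set of parameter names."""
    deps = {}
    for op, needed in consumes.items():
        deps[op] = {producer for producer, provided in produces.items()
                    if provided & needed}
    return list(TopologicalSorter(deps).static_order())
```

Running producers first gives the fuzzer realistic values for required parameters, which is what cuts down the runs of consecutive 404 responses.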
56

Informační systém návštěvnického centra / Information System of the Visitor Center

Sokl, Karel January 2014
This thesis focuses on the creation of an information system for a visitor center in Olomouc. The document describes the whole lifecycle of the reservation system, including requirements specification, system design, implementation, and testing. The final system will be deployed in a museum, where it will handle turnstile control, ticket reservations, and sales.
57

Datové rozhraní pro sdílení "městských dat" / Data Interface for Sharing of "City Data"

Fiala, Jan January 2021
The goal of this thesis is to explore existing solutions for sharing closed and open data, propose options for sharing non-public data, implement the selected solution, and demonstrate the functionality of the resulting system. The implementation output consists of a catalog of non-public datasets, a web application for administering them, an API gateway, and a demonstration application.
58

More tools for Canvas : Realizing a Digital Form with Dynamically Presented Questions and Alternatives

Sarwar, Reshad, Manzi, Nathan January 2019
At KTH, students who want to start their degree project must complete a paper form called "UT-EXAR: Ansökan om examensarbete/application for degree project". The form is used to determine students' eligibility to start a degree project, as well as potential examiners for the project. Only after the form is filled in and signed by multiple parties can a student begin the degree project. Because completing the form is excessively time-consuming, an alternative was introduced: a survey in the Canvas Learning Management System (LMS) that replaces the UT-EXAR form. Although the survey reduces the time students need to provide information and find examiners, it is by no means the most efficient solution. It suffers from multiple flaws, such as asking unnecessary questions and, for certain questions, presenting more alternatives than necessary. It also fails to automatically organize the collected answers, so administrators must enter the data into a spreadsheet or other record by hand. This thesis proposes an optimized solution: a dynamic survey. The dynamic survey uses the Canvas Representational State Transfer (REST) API to access students' program-specific data, and it uses the data students provide while answering to dynamically construct questions for each individual student, drawing on information from other KTH systems to dynamically construct customized alternatives as well. This effectively prevents the survey from presenting questions and choices irrelevant to an individual's case. Furthermore, the proposed solution inserts the collected data directly into a Canvas Gradebook.
To implement and test the proposed solution, a version of the Canvas LMS was created by virtualizing each Canvas-based microservice inside a Docker container and letting the containers communicate over a network. The survey itself used the Learning Tools Interoperability (LTI) standard. Testing showed that the survey not only successfully filters questions and alternative answers based on the user's data, but also has great potential to be more efficient than a survey with statically presented data, while effectively automating the insertion of the data into the gradebook.
59

A FRAMEWORK FOR IMPROVED DATA FLOW AND INTEROPERABILITY THROUGH DATA STRUCTURES, AGRICULTURAL SYSTEM MODELS, AND DECISION SUPPORT TOOLS

Samuel A Noel (13171302) 28 July 2022
The agricultural data landscape is largely dysfunctional because of the industry's high variability in scale, scope, technological adoption, and relationships. Integrated data and models of agricultural sub-systems could be used to advance decision-making, but interoperability challenges prevent successful innovation. In this work, temporal and geospatial indexing strategies and aggregation were explored toward the development of functional data structures for soils, weather, solar, and machinery-collected yield data that enhance data context, scalability, and sharability.

The data structures were then employed in the creation of decision support tools, including web-based applications and visualizations. One such tool leveraged a geospatial indexing technique called geohashing to visualize dense yield data and measure the outcomes of on-farm yield trials. Additionally, the proposed scalable, open-standard data structures were used to drive a soil water balance model that can provide insights into soil moisture conditions critical to farm planning, logistics, and irrigation. The model integrates SSURGO soil data, weather data from the Applied Climate Information System, and solar data from the National Solar Radiation Database to compute a soil water balance, returning values including runoff, evaporation, and soil moisture in an automated, continuous, and incremental manner.

The approach leveraged the Open Ag Data Alliance framework to demonstrate how the data structures can be delivered through sharable Representational State Transfer Application Programming Interfaces, and to run the model in a service-oriented manner so that it operates continuously and incrementally, which is essential for driving real-time decision support tools. The implementations rely heavily on JavaScript Object Notation data schemas leveraged by JavaScript/TypeScript front-end web applications and back-end services delivered through Docker containers. The approach embraces modular coding concepts, and several levels of open-source utility packages were published for interacting with data sources and supporting the service-based operations.

By making use of the strategies laid out by this framework, industry and research can enhance data-based decision-making through models and tools. Developers and researchers will be better equipped to take on the data wrangling tasks involved in retrieving and parsing unfamiliar datasets, moving them through information technology systems, and understanding those datasets down to a semantic level.
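The geohashing technique mentioned above interleaves longitude and latitude bisection bits and emits base-32 characters, so nearby points share a common prefix that can serve as an aggregation key for dense yield data. A sketch of the standard algorithm (not code from the framework itself):

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=6):
    """Standard geohash encoding: repeatedly bisect the longitude and
    latitude ranges, interleaving the resulting bits (longitude first)
    and emitting one base-32 character per 5 bits."""
    lat_range = [-90.0, 90.0]
    lon_range = [-180.0, 180.0]
    bits, bit_count = 0, 0
    even = True  # True -> longitude bit, False -> latitude bit
    result = []
    while len(result) < precision:
        rng, value = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if value >= mid:
            bits = (bits << 1) | 1
            rng[0] = mid
        else:
            bits <<= 1
            rng[1] = mid
        even = not even
        bit_count += 1
        if bit_count == 5:
            result.append(BASE32[bits])
            bits, bit_count = 0, 0
    return "".join(result)
```

Truncating the hash to fewer characters yields coarser cells, so grouping yield points by a shared prefix aggregates them spatially.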
60

Získávání informací o uživatelích na webových stránkách / Browser and User Fingerprinting for Practical Deployment

Vondráček, Tomáš January 2021
The aim of this diploma thesis is to map the information provided by web browsers that can be used in practice to identify users on websites. The work focuses on obtaining and then analyzing information about devices, browsers, and the side effects caused by web extensions that mask users' identity. The information is acquired by a library designed and implemented in TypeScript, which was deployed on four commercial websites. The analysis of the obtained information was carried out after a month of operation and focuses on how informative the obtained attributes are, how quickly they can be obtained, and how stable they remain over time. The dataset shows that up to 94 % of potentially distinct users have a unique combination of information. The main contribution of this work lies in the created library, the design of new methods of obtaining information, the optimization of existing methods, and the classification of attributes as high or low quality based on their informativeness, acquisition speed, and stability over time.
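Attributes collected by such a library are typically combined into a single stable identifier. A minimal sketch of that combination step (the attribute names are hypothetical, and while the thesis's library is in TypeScript, the canonical-serialization-plus-hash approach shown here is a common general pattern, not the thesis's code):

```python
import hashlib

def fingerprint(attributes):
    """Combine collected browser attributes into one stable identifier
    by hashing a canonical (key-sorted) serialization. Attributes known
    to be unstable over time can simply be left out of the dict."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Sorting the keys makes the identifier independent of collection order, so the same browser yields the same hash across visits as long as its attribute values are stable.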
