241

Latency-optimized edge computing in Fifth Generation (5G) cellular networks

Haavisto, J. (Juuso) 12 October 2018 (has links)
The purpose of this thesis is to research latency-optimized edge computing in 5G cellular networks. Specifically, the research focuses on low-latency software-defined services on open-source software (OSS) core network implementations. A literature review revealed that few OSS implementations of Long Term Evolution (LTE) core networks exist, let alone 5G ones. It was also found that OSS is essential in research to allow latency optimizations deep in the software layer; such optimizations are hard or impossible to implement on proprietary systems. As such, to achieve minimal latency in end-to-end (E2E) over-the-air (OTA) testing, an OSS core network was installed at the University of Oulu to operate in conjunction with the existing proprietary one. This thesis concludes that a micro-operator can be run on current OSS LTE core network implementations. Latency-wise, current LTE modems were found capable of achieving an E2E latency of around 15 ms in OTA testing. As a contribution, an OSS infrastructure was installed at the University of Oulu. This infrastructure may serve the needs of academics better than a proprietary one: for example, experimentation with off-specification functionality in core networks should be more accessible, and the installation enables easy addition of arbitrary hardware. This might be useful in research on tailored services through mobile edge computing (MEC) in the micro-operator paradigm. Finally, it is worth noting that the test network at the University of Oulu operates at a rather small scale. Thus, it remains an open question if and how bigger mobile network operators (MNOs) can provide latency-optimized services while balancing latency against throughput and quality of service (QoS).
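To make the kind of E2E latency measurement described above concrete, the following minimal Python sketch times UDP round trips against a hypothetical echo server. The address, port, and sample count are placeholders, and the thesis's actual OTA test setup is not reproduced here.

```python
import socket
import statistics
import time

# Placeholder endpoint (TEST-NET-1 address): assumes a UDP echo server
# is reachable over the network under test.
HOST, PORT = "192.0.2.10", 9999
SAMPLES = 100

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)

rtts = []
for seq in range(SAMPLES):
    payload = seq.to_bytes(4, "big")
    start = time.perf_counter()
    sock.sendto(payload, (HOST, PORT))
    try:
        data, _ = sock.recvfrom(64)       # echo server returns the payload
    except socket.timeout:
        continue                          # count as a lost sample
    if data == payload:                   # ignore reordered or stale replies
        rtts.append((time.perf_counter() - start) * 1000)  # milliseconds

print(f"{len(rtts)}/{SAMPLES} replies, "
      f"median RTT {statistics.median(rtts):.1f} ms, min {min(rtts):.1f} ms")
```

Reporting the median and minimum rather than the mean keeps one-off scheduling spikes from dominating the result, which matters when the quantity of interest is a latency floor of around 15 ms.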
242

The suitability of usability evaluation methods in a gym environment

Rajala, J. (Joni) 19 November 2018 (has links)
As fitness training grows in popularity, demand for software and devices developed for the gym increases. As people's interest in a particular kind of technology grows, more is expected of that technology, and feedback on its usability begins to flow in. While exercising, a person cannot pause the way they can in ordinary situations, so this mobile context differs from ordinary software use. Mobile device usability has recently received considerable attention and a great deal of research exists on it, but studies specializing in exercise are very rare. This study examines the special characteristics of the gym from a usability perspective by reviewing literature on general usability and the mobile context. The end goal is to form an understanding of how the constraints of exercise, and the advantages it brings, affect usability at the gym.
243

Usability design of mobile devices

Launonen, P.-P. (Pirkka-Pekka) 21 November 2018 (has links)
The purpose of this literature review is to examine the usability of mobile applications and mobile platforms, and the theories related to them. Complex computer systems are part of everyone's daily life nowadays, and such systems are found in every home. It is therefore worth studying the design challenges that mobile applications and operating systems bring. From the standpoint of mobile usability, the key question is which functions are essential to the user of a mobile device. By studying the problem areas of mobile usability and design, generally sound design principles and guidelines for usability design can be compiled. The research problem examines what kinds of factors affect the usability design of a mobile device and its usability in general. This study also examines whether there are differences between traditional software design and the design of mobile applications. The review shows that the usability of mobile devices is affected by their different context of use compared with traditional desktop user interfaces and applications. Complex functions increase users' frustration and worsen attitudes toward mobile devices. The physical characteristics of a mobile device, such as screen size, also introduce usability design problems that designers must take into account. There is likewise a need for dedicated evaluation of mobile usability problems. This review found no studies on modern touchscreen mobile devices and therefore recommends that studies also be conducted on modern devices; during the review, no up-to-date studies were found, or they did not fit the theme of this review.
244

Making sense of online news on climate change: a sentiment analysis approach

Palokangas, T. (Teemu) 15 November 2018 (has links)
Sentiment analysis is a data mining approach used to make sense of the ever-increasing volume of online media in order to figure out what people think of products, services, or issues. Applications to journalism, as a form of public debate on wider political issues, have been in the minority. This thesis took the form of a literature review investigating sentiment analysis as an approach that could extend and enhance existing media research methods, particularly framing, which concerns the viewpoints that get into the media frame and thus into the public eye. Online news on climate change was used as the topical focus of this treatment. The thesis covered recent relevant sentiment analysis research to outline the developments, possibilities, and challenges of using sentiment analysis to extend framing research. The results indicated that, from a journalistic point of view, sentiment analysis research is developing in a promising direction: toward approaches where domain- and topic-specific methodologies are increasingly emphasized in addition to more general classification into good-bad or positive-negative. The value of combining sentiment analysis and journalistic framing research could lie in making better sense of public debate and the shaping of public opinion on important, complex issues such as climate change.
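As a minimal illustration of the positive-negative classification the abstract mentions, the following Python sketch scores a headline against a toy polarity lexicon. The lexicon entries and headline are invented for the example; real studies would use larger, domain-tuned resources, in line with the thesis's emphasis on topic-specific methods.

```python
# Toy polarity lexicon: word -> sentiment weight.
LEXICON = {
    "risk": -1, "threat": -2, "crisis": -2, "loss": -1,
    "progress": 2, "solution": 2, "agreement": 1, "hope": 1,
}

def sentiment_score(text: str) -> int:
    """Sum lexicon polarities over the tokens of a headline or sentence."""
    return sum(LEXICON.get(tok.strip(".,!?"), 0) for tok in text.lower().split())

headline = "Climate agreement brings hope and progress"
score = sentiment_score(headline)                       # 1 + 1 + 2 = 4
label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
print(score, label)                                     # 4 positive
```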
245

Android-based customizable media crowdsourcing toolkit for machine vision research

Alorwu, A. (Andy) 10 December 2018 (has links)
Smart devices have become more complex and powerful, increasing in computational power, storage capacity, and battery longevity. Currently available online facial recognition databases do not offer training datasets with enough contextually descriptive metadata for novel scenarios, such as using machine vision to detect whether people in a video like each other based on their facial expressions. The aim of this research is to design and implement a software tool that enables researchers to collect videos from a large pool of people through crowdsourcing for machine vision analysis. We are particularly interested in tagging the videos with the demographic data of study participants as well as data from a custom post hoc survey. This study has demonstrated that smart devices and their embedded technologies can be utilized to collect videos as well as self-evaluated metadata through crowdsourcing. The application makes use of sensors embedded in smart devices, such as the camera and GPS, to collect videos, survey data, and geographical data. User engagement is encouraged through periodic push notifications. The videos and metadata collected with the application will be used in future machine vision analysis of various phenomena, such as investigating whether machine vision could detect people's fondness for each other based on their facial expressions and self-evaluated post-task survey data.
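As a rough sketch of the kind of record such a toolkit might collect, the following Python dataclass pairs a clip with the demographic, location, and survey metadata described above. Every field name here is hypothetical, not the application's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class VideoSubmission:
    """One crowdsourced clip and its contextual metadata (illustrative only)."""
    video_path: str
    recorded_at: datetime
    latitude: float                # from the device GPS sensor
    longitude: float
    participant_age: int           # demographic data from the participant
    participant_gender: str
    survey_answers: dict = field(default_factory=dict)  # post hoc survey

clip = VideoSubmission(
    video_path="clips/session_001.mp4",
    recorded_at=datetime(2018, 12, 1, 14, 30),
    latitude=65.012, longitude=25.465,      # roughly Oulu
    participant_age=27, participant_gender="female",
    survey_answers={"liked_partner": "yes"},
)
print(clip.survey_answers)
```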
246

Improving video game designer workflow in procedural content generation-based game design: a design science approach

Bomström, H. (Henri) 10 December 2018 (has links)
The time and money spent on video games are rapidly increasing, as annual U.S. game industry consumer spending has reached 23.5 billion dollars. The cost of producing video game content has grown in accordance with consumer demand. Artificial intelligence (AI) has been suggested as a way to scale production costs with demand. In addition to lowering content production costs, AI enables new forms of gameplay that are not possible with the industry's current toolbox. Utilizing AI in game design is currently difficult, as it requires both theoretical knowledge and practical expertise. This thesis improved game designer workflow in procedural content generation (PCG)-based game design by explicating the theoretical frameworks and practical steps needed to adopt AI-based practices in game design. The workflow was improved by utilizing the design science research (DSR) method. The constructed artefact was determined to be a method in accordance with the DSR knowledge contribution framework, and it was evaluated using the Quick & Simple strategy from the FEDS framework. The risks related to artefact construction were assessed in accordance with the RMF4DSR framework, and the metrics used to measure the artefact's performance were determined with the GQM framework. Finally, the proposed method was evaluated by following it to construct a simple PCG-based game with an accompanying AI system; the evaluation was performed with the FEDS framework in an artificial setting. After gathering and analysing the data from the artefact construction and evaluation, the method was modified to address its shortcomings. The produced design method is the main contribution of this thesis. It lowers the threshold for adopting PCG-based game design practices and helps designers, developers, and researchers by providing concrete, actionable steps to follow. The necessary theoretical frameworks and decision points are presented in a single method that demystifies the process of designing PCG-based games. Additional theoretical knowledge was contributed by studying the topic from a practical perspective and extracting requirements from an actual design process. The method can be used as a practical cookbook for PCG-based projects and as a theoretical base for further studies on PCG-based game design. Future research tasks include evaluating the proposed method in an organizational context with real users; such a context also calls for means of managing risks in PCG-based game design projects. Finally, generator evaluation and explicit guidance on generator control are important future research topics.
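For readers unfamiliar with PCG, the following minimal Python sketch generates a random tile-based level from a seed. It is a generic illustration of procedural generation, not the method or game built in the thesis.

```python
import random

def generate_level(width: int, height: int, wall_prob: float = 0.35,
                   seed=None) -> list:
    """Generate a grid level: '#' is wall, '.' is floor; the border is
    always wall. A fixed seed makes the output reproducible, which is
    how PCG trades authored content for generator parameters."""
    rng = random.Random(seed)
    level = []
    for y in range(height):
        row = ""
        for x in range(width):
            border = x in (0, width - 1) or y in (0, height - 1)
            row += "#" if border or rng.random() < wall_prob else "."
        level.append(row)
    return level

for row in generate_level(20, 8, seed=42):
    print(row)
```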
247

3D web visualization of continuous integration big data

Mattasantharam, R. (Rubini) 11 December 2018 (has links)
Continuous Integration (CI) is a practice used to automate the software build and its tests for every code integration to a shared repository. CI runs thousands of test scripts every day in a software organization. Every test produces data, such as test result logs (errors and warnings), performance measurements, and build metrics. The volume of this data tends to grow at unprecedented rates as builds accumulate in the CI system, and the amount of integrated test result data grows over time. Visualizing and manipulating such real-time, dynamic data is a challenge for organizations. 2D visualization of big data has been in active use in the software industry. Although 2D visualization has numerous advantages, this study focuses on 3D representation of CI big data and its advantages over 2D visualization. Interactivity with the data and the system, and accessibility of the data anytime and anywhere, are two important requirements for the system to be usable. Thus, the study focused on creating a 3D user interface to visualize CI system data in a 3D web environment. Three-dimensional user interfaces have been studied by many researchers, who have identified various advantages of 3D visualization along with various interaction techniques and have described how such systems are useful in real-world 3D applications. However, the usability of 3D user interfaces for visualization has not yet reached a desirable level, especially in the software industry, owing to the complexity of its data. The purpose of this thesis is to explore 3D data visualization that could help the CI system users of a beneficiary organization interpret and explore CI system data. The study focuses on designing and creating a 3D user interface that provides a more effective and usable system for CI data exploration. The design science research framework was chosen as the research method. The study identifies the advantages of applying 3D visualization to software system data and then explores how 3D visualization could help users explore the data through visualization and its features. The results reveal that 3D visualization helps the beneficiary organization view and compare multiple datasets in a single screen space, and see both a holistic view of large datasets and focused details of multiple datasets of various categories in a single screen space. The results also suggest that 3D visualization helps the organization's CI team represent big data better in 3D than in 2D.
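As a small illustration of plotting CI build metrics in three dimensions, the following Python sketch renders synthetic build data with matplotlib. The dataset is invented, and the organization's actual 3D web interface is not reproduced here.

```python
import random
import matplotlib.pyplot as plt

# Synthetic CI data: one point per build (build number, duration, failures).
random.seed(1)
builds = list(range(1, 201))
durations = [10 + b * 0.05 + random.gauss(0, 2) for b in builds]   # minutes
failures = [max(0, int(random.gauss(3, 2))) for _ in builds]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(builds, durations, failures, c=failures, cmap="viridis")
ax.set_xlabel("build number")
ax.set_ylabel("duration (min)")
ax.set_zlabel("test failures")
plt.show()
```

The third axis is what lets three dataset categories share one screen space, which is the advantage over 2D that the study emphasizes.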
248

An experimental evaluation of Java Design-by-Contract extensions

Aghaei, M. (Majid) 10 December 2018 (has links)
Design by Contract (DbC), also referred to as Programming by Contract, is a programming paradigm for software verification proposed by Bertrand Meyer. The idea is to place obligations on code elements such as methods, interfaces, and classes so that they satisfy the specification of the source code. DbC requires a piece of code to satisfy certain conditions before execution (preconditions), to ensure certain conditions after execution (postconditions), and to keep certain conditions unchanged (invariants). This agreement is called a contract, and it must hold for that piece of code. According to Meyer's paradigm, a program works correctly if it fulfils DbC principles for each method. To support programmers, various libraries make DbC possible during the coding phase; each is applicable to a specific programming language and offers its own features and functionality. Choosing the most suitable tool for a particular purpose is therefore important for development teams deploying software validation. This thesis experimentally evaluated and compared DbC instrumentors, specifically for Java, to determine which tool performed best. To accomplish this, a simple model system was designed and implemented using the constraining principles of the tools under study. The scrutiny of the extensions revealed that OpenJML, a powerful framework, generated better results than the other tools. However, the results of this research apply to small projects deploying such constraining tools.
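As a minimal illustration of the precondition/postcondition idea (invariants, which constrain object state across calls, are omitted), the following Python decorator sketches how DbC checks can wrap a function. It is a generic sketch, not the API of OpenJML or any tool evaluated in the thesis.

```python
import functools

def contract(pre=None, post=None):
    """Attach a precondition and a postcondition to a function; a violated
    condition raises AssertionError, mirroring how DbC tools instrument code."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition of {func.__name__} violated"
            result = func(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition of {func.__name__} violated"
            return result
        return wrapper
    return decorate

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def integer_sqrt(x: int) -> int:
    """Largest n with n*n <= x; the contract documents and checks this."""
    n = 0
    while (n + 1) * (n + 1) <= x:
        n += 1
    return n

print(integer_sqrt(10))   # 3
integer_sqrt(-1)          # raises AssertionError: precondition violated
```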
249

DevOps in Finland: study of practitioners' perception

Paulin, T. (Tuomas) 10 December 2018 (has links)
DevOps is one of the latest software development practices and has lately gained interest in both academia and practice. DevOps extends Agile practices to software operations and aims to make the software development process faster and more reliable and to increase collaboration. There are multiple studies that aim to define DevOps, but only a few try to understand and evaluate how DevOps is utilized and understood in practice and at large. The aim of this study is to investigate DevOps adoption, practices, and tool usage by software professionals in Finland, along with the perceived benefits and challenges of DevOps adoption. A survey with an online questionnaire was selected as the method for gathering data from software practitioners in Finland. Previous DevOps literature was used to establish an understanding of the concept and to create meaningful survey questions. A link to the online questionnaire was distributed to Finnish practitioners via Slack, LinkedIn, and mailing lists during spring 2018; multiple channels were used to collect sufficient responses for analysis. A total of 81 respondents answered the questionnaire, coming from different backgrounds with respect to organization size, role, and team size. Most of the participants had already adopted DevOps, and a clear understanding of the concept was considered the most important factor in DevOps implementation. Automation was both central to the meaning of the concept and the most widely agreed-on practice. Faster release cycle time and improved system quality were the most widely agreed benefits, while a lack of common understanding of DevOps was considered the biggest challenge. A multitude of different tools are used in organizations; the most popular in their categories were Jenkins (CI), Kibana (monitoring), Amazon AWS (cloud), and Ansible (configuration/provisioning). Further research and qualitative data are required to find out the actual reasons behind these results. The questionnaire instrument can be reused on different target groups, and qualitative questions should be asked at the organization level to uncover the reasons behind different implementations of DevOps.
250

Benefits and challenges of distributed development and their role in global software development

Barsk, J. (Joni) 12 December 2018 (has links)
Globally Distributed Development has arguably become a phenomenon in the past decades. Organizations have tried to utilize the benefits of Globally Distributed Development to gain an upper hand in the global market, and have attempted Global Software Development since the nineties with varying degrees of success. This thesis analysed the effects that could cause an organization to begin or to give up a Globally Distributed Development process. The thesis was conducted as a literature review: scientific literature was analysed to answer predetermined research questions on what has an impact on Global Software Development, which practices could be successful in it, and what differences there might be between Global Software Development and Global Game Software Development. Effects that could influence the decision either way were divided into benefits and challenges, and then further organized by processes and dimensions. The processes were communication, coordination, and control; the dimensions were temporal, geographical, and socio-cultural. A three-by-three grid was built from the processes and dimensions, and the effects were placed in the table and analysed to see how they might influence distributed development. The conclusions included that benefits could arguably be further divided into effortless and effortful benefits. These benefits are individual, depending on the organization, but in general it was argued that effortless benefits should be utilized as well as possible and effortful benefits should be prioritized. The conclusions on challenges were less definitive. Arguably, most of the challenges are connected, meaning that alleviating one will likely have adverse effects on another. Careful planning, execution, and follow-up were recommended when an organization tries to alleviate the different challenges of distributed development. Analysis of the benefits and challenges yielded further information, divided into three parts: communication, control, and coordination. The most significant finding was the importance of planning: GSD without a well-formed plan is going to fail, as distributed development has too many moving parts to allow haphazard plans and executions of those plans. The analysis also highlighted communication as an integral part of success in GSD. Outsourcing is a significant part of globally distributed game development, as many components of game design, such as voice acting and music composition, require skills that a programmer might not possess. For a game software organization, outsourcing could arguably be significantly more important than for a regular software organization, not only because of the vast array of talents needed but also because of the short period for which they are needed. As a result, Distributed Game Software Development could be described as a single-site software organization with outsourced, autonomous, multi-site subsidiary task development teams. Finally, the thesis summarizes the literature review by arguing that communication and pre-development work are the two most important factors for successful globally distributed development.
