  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Analytics for Software Product Planning

Saha, Shishir Kumar, Mohymen, Mirza January 2013 (has links)
Context. Software product planning involves product lifecycle management, roadmapping, release planning, and requirements engineering. Requirements are collected and used together with criteria to define short-term plans (release plans) and long-term plans (roadmaps). The stage of the product lifecycle determines whether a product is mainly evolved, extended, or simply maintained. When eliciting requirements and identifying criteria for software product planning, the product manager is confronted with statements about customer interests that do not correspond to customer needs. Analytics summarize, filter, and transform measurements to obtain insights about what happened, how it happened, and why it happened. Analytics have been used for improving the usability of software solutions, monitoring the reliability of networks, and for performance engineering. However, using analytics to guide the evolution of a software solution is unexplored. In a context where a misunderstanding of users' needs can easily lead an otherwise effective product design to failure, analytics support for software product planning can help reveal which features of a product are actually useful to users or customers.
Objective. Given the lack of primary studies, the first step is to apply analytics to software product planning by understanding product usage measurement. This research therefore aims to understand which analytics of users' interaction with SaaS applications are relevant, to identify an effective way to collect them, and to measure feature usage with page-based and feature-based analytics in order to provide decision support for software product planning.
Methods. This research combines a literature review of the state of the art, to understand the research gap and related work and to identify relevant analytics for software product planning, with a market analysis comparing the features of different analytics tools to identify an effective way to collect relevant analytics. A prototype analytics tool is then developed to explore how feature usage of a SaaS website can be measured to provide decision support for software product planning. Finally, a software simulation examines the impact of page clutter, erroneous page presentation, and feature spread on page-based and feature-based analytics.
Results. The literature review identifies studies describing the categories of software analytics relevant for measuring software usage. A software-supported approach, developed from the feature comparison of different analytics tools, gives product planners an effective way of collecting analytics. The simulation shows that page clutter, erroneous page presentation, and feature spread exaggerate feature usage measurements with page-based analytics, but not with feature-based analytics.
Conclusions. The research provides a wide set of evidence for understanding which analytics are relevant for software product planning, and shows SaaS product managers how feature usage can be measured. It also shows how page clutter, erroneous page presentation, and feature spread affect page-based versus feature-based analytics.
Further case studies can be performed to evaluate the solution proposals by tailoring them to company needs.
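The difference between page-based and feature-based measurement described in this abstract can be sketched in a few lines. This is a minimal illustration, not the thesis prototype: the event schema, the `/dashboard` page, and the `export`/`filter` feature names are all hypothetical.

```python
# Hypothetical usage events: (user, page, feature interacted with);
# feature is None when the user merely loads the page.
events = [
    ("u1", "/dashboard", "export"),
    ("u1", "/dashboard", None),
    ("u2", "/dashboard", None),
    ("u2", "/dashboard", "filter"),
    ("u3", "/dashboard", None),
]

def page_views(events, page):
    # Page-based analytics credit every feature on a page with each view,
    # so page clutter and feature spread inflate the measurement.
    return sum(1 for _, p, _ in events if p == page)

def feature_uses(events, feature):
    # Feature-based analytics count only explicit interactions.
    return sum(1 for _, _, f in events if f == feature)

print(page_views(events, "/dashboard"))  # 5
print(feature_uses(events, "export"))    # 1
```

Five page views versus one actual interaction with the export feature: the gap between the two counts is exactly the exaggeration the simulation in the thesis investigates.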
172

Visual SLAM and Surface Reconstruction for Abdominal Minimally Invasive Surgery

Lin, Bingxiong 01 January 2015 (has links)
Depth information of tissue surfaces and laparoscope poses are crucial for accurate surgical guidance and navigation in Computer Assisted Surgeries (CAS). Intra-operative Three Dimensional (3D) reconstruction and laparoscope localization are therefore two fundamental tasks in CAS. This dissertation focuses on abdominal Minimally Invasive Surgeries (MIS) and presents laparoscopic-video-based methods for these two tasks. Different kinds of methods have been presented to recover 3D surface structures of surgical scenes in MIS, mainly based on laser, structured light, time-of-flight cameras, and video cameras. Among them, laparoscopic-video-based surface reconstruction techniques have significant advantages: they are non-invasive, provide intra-operative information, and do not introduce extra hardware to the current surgical platform. On the other hand, laparoscopic-video-based 3D reconstruction and laparoscope localization are challenging tasks due to the peculiarities of the abdominal imaging environment, whose well-known difficulties include low texture, homogeneous areas, and tissue deformations. The goal of this dissertation is to design novel 3D reconstruction and laparoscope localization methods that overcome these challenges. Two novel methods are proposed to achieve accurate 3D reconstruction for MIS. The first is based on the detection of distinctive image features, which is difficult in MIS images due to the low-texture, homogeneous tissue surfaces. To overcome this problem, this dissertation first introduces new types of image features for MIS images based on blood vessels on tissue surfaces, and designs novel methods to efficiently detect them. After vessel features have been detected, novel methods are presented to match them in stereo images so that 3D vessels can be recovered for each frame.
Those 3D vessels from different views are integrated into a global 3D vessel network, and Poisson reconstruction is applied to achieve large-area dense surface reconstruction. The second method is texture-independent and does not rely on the detection of image features. Instead, a single-point light source is mounted on the abdominal wall; shadows are cast on tissue surfaces when surgical instruments move in front of the light. Shadow boundaries are detected and matched in stereo images to recover depth information, and the recovered 3D shadow curves are interpolated to achieve dense reconstruction of tissue surfaces. One novel stereoscope localization method is designed specifically for the abdominal environment. The method relies on RANdom SAmple Consensus (RANSAC) to differentiate rigid points from deforming points. Since no assumption is made about the tissue deformations, the proposed method is able to handle general tissue deformations and achieve accurate laparoscope localization in the abdominal MIS environment. With the stereoscope localization results and the large-area dense surface reconstruction, a new scene visualization system, the periphery augmented system, is designed to augment the peripheral areas of the original video so that surgeons have a larger field of view. A user-evaluation system is designed to compare the periphery augmented system with the original MIS video. Thirty subjects, including four surgeons specialized in abdominal MIS, participated in the evaluation, and a numerical measure is defined to represent their understanding of surgical scenes. A t-test is performed on the numerical errors, and the null hypothesis that the periphery augmented system and the original video have the same mean error is rejected. In other words, the results validate that the periphery augmented system improves users' understanding and awareness of surgical scenes.
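The RANSAC step that separates rigid from deforming points can be sketched as follows. This is a simplified illustration, not the dissertation's implementation: it assumes known 3D point correspondences between two frames, fits a rigid transform to minimal samples with the Kabsch algorithm, and labels points by a fixed inlier threshold; the function names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rigid_fit(P, Q):
    # Kabsch: least-squares rotation R and translation t with Q ~ P @ R.T + t.
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, q0 - R @ p0

def ransac_rigid_points(P, Q, iters=200, tol=0.01):
    # Classify correspondences as rigid (inliers of the best rigid
    # transform) or deforming (outliers); no deformation model is assumed.
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)  # minimal sample
        R, t = rigid_fit(P[idx], Q[idx])
        err = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

The rigid inliers can then be fed to a pose estimator for stereoscope localization, while the outliers are treated as deforming tissue.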
173

Viability of Feature Detection on Sony Xperia Z3 using OpenCL

Danielsson, Max, Sievert, Thomas January 2015 (has links)
Context. Embedded-platform GPUs are reaching a level of performance comparable to desktop hardware. It therefore becomes interesting to apply computer vision techniques to modern smartphones. The platform holds different challenges, as energy use and heat generation can be an issue depending on load distribution on the device. Objectives. We evaluate the viability of a feature detector and descriptor on the Xperia Z3. Specifically, we evaluate the pair based on real-time execution, heat generation, and performance. Methods. We implement the feature detector and descriptor pair Harris-Hessian/FREAK for GPU execution using OpenCL, focusing on embedded platforms. We then study the heat generation of the application and its execution time, and compare our method to two other methods, FAST/BRISK and ORB, to evaluate vision performance. Results. Execution time data for the Xperia Z3 and a desktop GeForce GTX 660 is presented. Run-time temperature values for a run of nearly an hour are presented with correlating CPU and GPU activity. Images containing comparison data for BRISK, ORB, and Harris-Hessian/FREAK are shown with performance data and discussion of notable aspects. Conclusions. Execution times on the Xperia Z3 are deemed insufficient for real-time applications, while desktop execution shows that there is future potential. Heat generation is not a problem for the implementation. Implementation improvements are discussed at length for future work. Performance comparisons of Harris-Hessian/FREAK suggest that the solution is very vulnerable to rotation, but superior on scale-variant images; it generally appears suitable for near-duplicate comparisons, delivering a much greater number of keypoints. Finally, insight into OpenCL application development on Android is given.
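A key property shared by FREAK, BRISK, and ORB is that they produce binary descriptors, which are matched with the Hamming distance (XOR plus popcount) rather than floating-point arithmetic. A minimal sketch of brute-force matching on packed-byte descriptors, independent of any particular library:

```python
import numpy as np

def hamming(a, b):
    # Binary descriptors are bit strings packed into uint8 bytes; their
    # distance is the number of differing bits: XOR, then popcount.
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match(query, train):
    # Brute-force nearest neighbour: index of the closest train
    # descriptor for each query descriptor.
    return [min(range(len(train)), key=lambda j: hamming(q, train[j]))
            for q in query]
```

Because XOR and popcount map to single instructions on most CPUs and GPUs, this matching style is what makes binary descriptors attractive on embedded hardware such as the Xperia Z3.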
174

Optimization methods for inventive design / Méthodes d’optimisation pour la conception inventive

Lin, Lei 01 April 2016 (has links)
La thèse traite des problèmes d'invention où les solutions des méthodes d'optimisation ne satisfont pas aux objectifs des problèmes à résoudre. Les problèmes ainsi définis exploitent, pour leur résolution, un modèle de problème étendant le modèle de la TRIZ classique sous une forme canonique appelée "système de contradictions généralisées". Cette recherche instrumente un processus de résolution basé sur la boucle simulation-optimisation-invention permettant d'utiliser à la fois des méthodes d'optimisation et d'invention. Plus précisément, elle modélise l'extraction des contradictions généralisées à partir des données de simulation sous forme de problèmes d'optimisation combinatoire et propose des algorithmes donnant toutes les solutions à ces problèmes. / The thesis deals with problems of invention where the solutions of optimization methods do not meet the objectives of the problems to solve. The problems thus defined exploit, for their resolution, a problem model extending the model of classical TRIZ into a canonical form called a "generalized system of contradictions". This research sets up a resolution process based on the simulation-optimization-invention loop, using both optimization and invention methods. More precisely, it models the extraction of generalized contradictions from simulation data as combinatorial optimization problems, and proposes algorithms that provide all the solutions to these problems.
175

Probabilistic Shape Parsing and Action Recognition Through Binary Spatio-Temporal Feature Description

Whiten, Christopher J. January 2013 (has links)
In this thesis, contributions are presented in the areas of shape parsing for view-based object recognition and spatio-temporal feature description for action recognition. A probabilistic model for parsing shapes into several distinguishable parts for accurate shape recognition is presented. This approach is based on robust geometric features that permit high recognition accuracy. As the second contribution of this thesis, a binary spatio-temporal feature descriptor is presented. Recent work shows that binary spatial feature descriptors are effective for increasing the efficiency of object recognition while retaining performance comparable to state-of-the-art descriptors. An extension of these approaches to action recognition is presented, facilitating large gains in efficiency due to the computational advantage of computing a bag-of-words representation with the Hamming distance. A scene's motion and appearance are encoded with a short binary string. Exploiting the binary makeup of this descriptor greatly increases efficiency while retaining competitive recognition performance.
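The bag-of-words step the abstract refers to assigns each binary descriptor to its nearest visual word under the Hamming distance and accumulates a histogram. A minimal sketch under assumed inputs (a list of packed uint8 descriptors and a small pre-clustered vocabulary; the function name is invented for illustration):

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    # Assign each binary descriptor to its nearest visual word under the
    # Hamming distance (XOR + popcount) and return a normalized histogram.
    hist = np.zeros(len(vocabulary))
    for d in descriptors:
        dists = [int(np.unpackbits(np.bitwise_xor(w, d)).sum())
                 for w in vocabulary]
        hist[int(np.argmin(dists))] += 1.0
    return hist / max(len(descriptors), 1)
```

Because the per-word distance is a handful of bit operations, building the histogram is far cheaper than with floating-point descriptors, which is the efficiency gain the thesis exploits.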
176

The Feature Creep Perception in Game Development : Exploring the role of feature creep in development methods and employee engagement / Upplevelsen av feature creep inom spelutveckling : En undersökning om vilken roll feature creep har i utvecklingsmetoder och arbetsengagemang

Neuhofer, Erik Joachim, Zelenka af Rolén, Samuel January 2021 (has links)
Game developers often come up with ideas throughout the production period of a game. These ideas vary in size and may go unnoticed or seem insignificant to the scope of the project, but in the long run they add up, breaking deadlines and budgets and affecting the morale and engagement of developers. In the modern game development industry, agile development methods, which allow flexibility in the development process, have increased in popularity. This agile approach emerged from traditional software development, where waterfall methods are common practice (Kanode and Haddad, 2009). Through in-depth interviews with developers from Sweden, Finland, and the United Kingdom, this study explores how feature creep is perceived by the individual developer and its effect on day-to-day development. The ambition is to establish whether feature creep as a phenomenon can be a useful tool for innovation and work culture. / Spelutvecklare får kontinuerligt idéer under produktionen. Idéerna varierar i storlek och kan gå obemärkta förbi eller anses obetydliga för projektets ramar, men kan över tid påverka deadline, budget, moral och engagemang hos spelutvecklare. I dagens spelindustri har agila arbetsmetoder, som möjliggör flexibla utvecklingsprocesser, ökat i popularitet. Det agila arbetssättet har vuxit fram ur traditionell mjukvaruutveckling där vattenfallsmetoder ofta förekommer (Kanode och Haddad, 2009). Genom ingående intervjuer med utvecklare från Sverige, Finland och Storbritannien undersöker den här studien hur feature creep upplevs av den enskilda spelutvecklaren och dess effekt på det dagliga utvecklandet i spelbranschen, med ambitionen att se hur fenomenet feature creep kan vara ett användbart verktyg för innovation och arbetskulturen.
177

Kardiovaskuläres Magnetresonanztomographie-gestütztes Feature Tracking: Methodenvergleich und Reproduzierbarkeit / Cardiovascular magnetic resonance feature-tracking: intervendor agreement and considerations regarding reproducibility

Stahnke, Vera-Christine 08 July 2020 (has links)
No description available.
178

Investigating maintenance costs using response feature analysis

Hansson, Linus January 2022 (has links)
Svenska kraftnät (Svk) is responsible for ensuring that Sweden has a safe, sustainable, and cost-effective transmission system for electricity. In an effort to reduce costs, Svk has participated in a study in which it was determined that costs for common facilities in particular stand out. The goal of this master's thesis is to identify and assess which factors influence maintenance costs for the substations in the Swedish national grid. Response feature modeling was applied to longitudinal data for the substations (N=53) for the years 2017-2020 to obtain individual intercepts with a common slope for further analysis. The factors included in the global model were selected based on Pearson correlation analysis and consultation with experts. Further attempts to improve on the global model were made through stepwise variable selection by comparing AIC, through a log-transformation of the response, and by applying expert knowledge to obtain a subset of predictors. All the resulting models were significant (P<0.001), with the best model having an R2 of 0.8376. Predictions were made for the first fifty years of the lifespan of a proposed substation. Predictors found significant in multiple models include variables regarding substation size and age. Factors not significant in any model include, among others, substation fence length and location. / Svenska kraftnät (Svk) är ansvariga för att försäkra att Sverige har ett säkert, hållbart och kostnadseffektivt transmissionsnät för elektricitet. I ett försök att minska kostnader deltog Svk i en studie där det fastslogs att det främst var kostnader för deras stamnätsstationer som stack ut kostnadsmässigt. Målet i den här uppsatsen är att identifiera och värdera faktorer som kan påverka underhållskostnader för stamnätsstationerna i det svenska stamnätet. Response feature modeling applicerades på longitudinell data (N=53) för åren 2017-2020.
Individuella intercept skattades med en gemensam lutning för att användas vid vidare analys. Faktorerna som initialt inkluderades i den globala modellen valdes genom en analys av Pearson-korrelationer samt i samråd med sakkunniga experter. Vidare försök att förbättra den globala modellen gjordes genom att applicera en stegvis variabelselektion baserad på AIC. Samma process genomfördes efter en log-transformation av responsvariabeln. Slutligen skapades ytterligare en modell med en delmängd faktorer utvald av experter på Svk. Samtliga regressionsmodeller var signifikanta (P<0.001), där den bästa modellen med avseende på R2 hade värdet 0.8376. Skattningar av framtida kostnader gjordes för de första femtio åren av en föreslagen stations livsspann. Faktorer som var signifikanta (P<0.05) inkluderade variabler som beskrev stationsstorlek, ställverkstyp och stationsålder. Faktorer som inte var signifikanta i någon modell var bland annat faktorer med anknytning till geografiskt läge och staketlängd.
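The "individual intercepts with a common slope" structure can be illustrated with an ordinary least-squares fit on a design matrix that has one dummy column per substation plus a single shared year column. This is a simplified sketch, not the thesis model (which selects among many covariates); the data and the function name are invented for illustration.

```python
import numpy as np

def response_feature_fit(ids, years, costs):
    # Design matrix: one intercept column per substation (dummy coding)
    # plus one shared column for year, giving individual intercepts and
    # a common slope, as in response feature analysis of longitudinal data.
    subjects = sorted(set(ids))
    X = np.zeros((len(ids), len(subjects) + 1))
    for row, (s, y) in enumerate(zip(ids, years)):
        X[row, subjects.index(s)] = 1.0
        X[row, -1] = y
    beta, *_ = np.linalg.lstsq(X, np.asarray(costs, float), rcond=None)
    intercepts = dict(zip(subjects, beta[:-1]))
    return intercepts, beta[-1]  # per-substation intercepts, common slope
```

The estimated intercepts then become the "response features" that are carried into the second-stage analysis against substation-level factors such as size and age.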
179

Multi-Source Fusion for Weak Target Images in the Industrial Internet of Things

Mao, Keming, Srivastava, Gautam, Parizi, Reza M., Khan, Mohammad S. 01 May 2021 (has links)
Information fusion in Industrial Internet of Things (IIoT) environments suffers from many problems, such as weak intelligent visual target positioning, disappearing features, and large errors in the visual positioning process. Therefore, this paper proposes a weak target positioning method based on multi-information fusion, namely the "confidence interval method". The basic idea is to treat the brightness and gray values of the target feature image area as a population with a certain average and standard deviation in IIoT environments. Based on the average and the standard deviation, and using a reasonable confidence level, a critical threshold is obtained. Compared with the threshold obtained by the maximum variance method, this threshold is more suitable for the segmentation of key image features in the presence of interference. After interpolation and de-noising, the method is applied to mobile weak target localization in complex IIoT systems. Experimental analysis in the metallurgical industry shows that the proposed method has better performance and stronger feature resolution.
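The core of the confidence interval method, as described here, reduces to a threshold derived from the region's mean and standard deviation at a chosen confidence level. A minimal sketch under stated assumptions: the one-sided form mu + z * sigma and the z value of 1.96 are this sketch's interpretation, not necessarily the paper's exact statistic.

```python
import numpy as np

def confidence_threshold(region, z=1.96):
    # Treat the gray values of the target region as a population with
    # mean mu and standard deviation sigma; the critical threshold at the
    # chosen confidence level is mu + z * sigma (z = 1.96 for ~95%).
    mu, sigma = float(np.mean(region)), float(np.std(region))
    return mu + z * sigma

def segment(image, threshold):
    # Binary segmentation: pixels above the critical threshold are
    # treated as candidate target features.
    return image > threshold
```

Unlike a maximum-variance (Otsu-style) threshold computed from the whole histogram, this threshold follows the statistics of the target region itself, which is why it tolerates background interference better.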
180

Evaluating machine learning strategies for classification of large-scale Kubernetes cluster logs

Sarika, Pawan January 2022 (has links)
Kubernetes is a free, open-source container orchestration system for deploying and managing Docker containers that host microservices. Its cluster logs are extremely helpful in determining the root cause of a failure. However, as systems become more complex, locating failures becomes more difficult and time-consuming. This study aims to identify the classification algorithms that accurately classify the given log data while requiring fewer computational resources. Because the data is quite large, we begin with expert-based feature selection to reduce the data size. Following that, TF-IDF feature extraction is performed, and finally, we compare five classification algorithms (SVM, KNN, random forest, gradient boosting, and MLP) using several metrics. The results show that random forest produces good accuracy while requiring fewer computational resources than the other algorithms.
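The TF-IDF extraction step mentioned in this abstract weights tokens that are frequent within a log line but rare across the corpus. A minimal stdlib-only sketch (the thesis presumably uses a library implementation; this version uses the plain log(N/df) form without smoothing):

```python
import math
from collections import Counter

def tfidf(docs):
    # df[t]: number of documents containing token t.
    df = Counter(t for d in docs for t in set(d.split()))
    n = len(docs)
    vectors = []
    for d in docs:
        tokens = d.split()
        tf = Counter(tokens)
        # tf-idf weight: (count / doc length) * log(N / df).
        vectors.append({t: (c / len(tokens)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors
```

Tokens appearing in every log line (e.g. the word "pod" in Kubernetes logs) receive weight zero, so the classifiers downstream only see the discriminative vocabulary.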
