371

Výpočet sedání výškové budovy s využitím metody konečných prvků / Prediction of foundation settlement of high-rise building using the finite element method

Červenka, Jan January 2020 (has links)
This thesis deals with the back-analysis (reverse engineering) of the settlement of a high-rise building, modelled with the finite element method in the Plaxis 3D program. In the first part, the input parameters of a suitable material model (Hardening Soil) are calibrated against oedometer test data obtained during the geotechnical survey. The influence of soil overconsolidation on the calibration and on the choice of material model is described. The resulting reference stiffness parameters are then used in a mathematical model of the studied area. The model covers one half of the high-rise building's floor plan, including the vestibule. The building is founded in a foundation pit on a raft supported by jet-grouted piles. Changes in pore pressure during excavation of the foundation pit are also included in the model. The working model is used for parametric studies, namely of alternative foundation variants and of the possible influence of the foundation pit's symmetry on the settlement of the structure. All computed settlement histories are compared with data obtained from geotechnical monitoring of the structure.
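As a rough illustration of the stiffness calibration described above, the sketch below fits the Hardening Soil reference oedometric stiffness from oedometer data, assuming the simplified cohesionless form E_oed = E_oed_ref * (sigma'_1 / p_ref)^m. The stress and stiffness values are invented for illustration and are not data from the thesis.

```python
import numpy as np

# Oedometer test data (assumed, illustrative): effective vertical
# stress [kPa] and tangent oedometric stiffness [kPa] derived from
# a measured load-settlement curve.
sigma = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
e_oed = np.array([8.0e3, 12.5e3, 19.0e3, 30.0e3, 47.0e3])

p_ref = 100.0  # reference pressure [kPa], the usual Plaxis default

# Hardening Soil, cohesionless simplification:
#   E_oed = E_oed_ref * (sigma / p_ref) ** m
# Taking logs turns this into a straight line, so m and E_oed_ref
# follow from a linear regression in log-log space.
m, log_e_ref = np.polyfit(np.log(sigma / p_ref), np.log(e_oed), 1)
e_oed_ref = np.exp(log_e_ref)

print(f"E_oed_ref = {e_oed_ref:.0f} kPa, m = {m:.2f}")
```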
372

Návrh referenčních modelů kluzáku L23 Blaník / Design of system model of L23 Blanik sailplane

Šenkýř, Miroslav January 2013 (has links)
The aim of this diploma thesis is to create usable reference models of the L-23 Super Blaník glider and to use them in a draft production technology that improves accuracy and reduces the labour intensity of production. The introduction describes the history of the glider and gives a historical overview of how its geometry has been transferred through the design documentation. The next chapter deals with reference models and the rules for their creation; the section describing coordinate systems is closely linked to it. A further section is dedicated to the role of reference models in the product lifecycle. The creation and tuning of the glider's reference models is described in the following chapter. The penultimate chapter deals with an experiment that allowed the reference models to be verified against the real product. The conclusion describes the proposed production technology, in which the verified reference models and modern CAD tools refine and simplify production. The main objective in designing the reference models was to define high-quality, usable surfaces; the proposed production technology was likewise worked out to improve and simplify the manufacturing process.
373

Zpětný překlad vysokoúrovňových konstrukcí jazyka C++ / Decompilation of High-Level Constructions in C++ Binaries

Jakub, Dušan January 2015 (has links)
The thesis addresses the decompilation of the high-level object-oriented C++ language from machine code. The term reverse engineering is defined, and existing decompilers are described with emphasis on their ability to reconstruct C++. The AVG decompiler project, to which this thesis contributes, is introduced. The C++ language is analysed both on the logical level and in the machine code, and existing methods of decompilation are described. On this basis a novel method is introduced, capable of decompiling classes, their hierarchy, constructors, destructors, and the definitions and usages of virtual methods. The method is implemented, tested and evaluated. In the conclusion, several suggestions for the future development of the project are presented.
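To make concrete one technique such decompilers typically rely on, the sketch below scans a flat little-endian x86-64 memory image for vtable-like runs of code pointers, i.e. consecutive 8-byte values pointing into the executable range. It is a generic, hedged illustration of vtable discovery, not code from the AVG decompiler, and it assumes 8-byte-aligned pointers.

```python
import struct

def find_vtable_candidates(image: bytes, base: int,
                           text_lo: int, text_hi: int, min_run: int = 2):
    """Scan a flat little-endian x86-64 memory image for vtable-like
    structures: runs of 8-byte-aligned pointers into the executable
    range [text_lo, text_hi). Returns (address, [method ptrs]) pairs."""
    candidates, run_start, run = [], None, []
    for off in range(0, len(image) - 7, 8):
        (ptr,) = struct.unpack_from("<Q", image, off)
        if text_lo <= ptr < text_hi:
            if run_start is None:
                run_start = base + off
            run.append(ptr)
        else:
            if run_start is not None and len(run) >= min_run:
                candidates.append((run_start, run))
            run_start, run = None, []
    if run_start is not None and len(run) >= min_run:
        candidates.append((run_start, run))
    return candidates

# Synthetic image: two code pointers followed by non-pointer data.
img = struct.pack("<QQQ", 0x401000, 0x401040, 0xDEADBEEF)
print(find_vtable_candidates(img, base=0x600000,
                             text_lo=0x400000, text_hi=0x500000))
# -> [(6291456, [4198400, 4198464])]
```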
374

[pt] REENGENHARIA DE SISTEMAS AUTOADAPTATIVOS GUIADA PELO REQUISITO NÃO FUNCIONAL DE CONSCIÊNCIA DE SOFTWARE / [en] SELF-ADAPTIVE SYSTEMS REENGINEERING DRIVEN BY THE SOFTWARE AWARENESS NON-FUNCTIONAL REQUIREMENT

ANA MARIA DA MOTA MOURA 11 December 2020 (has links)
In recent years, a significant number of self-adaptive systems (i.e., systems capable of knowing what is happening about themselves and consequently partially implementing the quality of awareness) have been developed. The literature has extensively researched the use of goal-oriented requirements engineering and of the MAPE (Monitor-Analyze-Plan-Execute) reference architecture for the development of self-adaptive systems. However, building such systems on reference strategies is not trivial and can result in structural problems that negatively impact quality attributes of the final product (e.g., reusability, modularity, modifiability and understandability). In this context, reengineering strategies for reorganizing such systems are poorly explored and limited to recovering and restructuring the adaptation logic in low-level models. This approach keeps awareness hard to treat as a first-class non-functional requirement (NFR), which directly affects the selection of the system's architecture and its implementation. Our research aims to mitigate this problem through a reengineering strategy for self-adaptive systems centered on software awareness as an NFR; the strategy assists in removing some recurring problems in MAPE implementations reported in the literature. The reengineering strategy is organized into four sub-processes: (A) recover the intentionality of the system with an emphasis on its awareness goals, generating an AS-IS goal model; (B) specify the TO-BE goal model by reusing a set of SRconstructs to operationalize the software awareness NFR according to the MAPE pattern; (C) redesign the system by reviewing the awareness operationalizations and selecting the technologies to implement MAPE; and (D) finally, reimplement the system according to the new structure, adding code metadata to maintain traceability to the self-adaptation mechanism and facilitate future evolutions. The scope of our research is object-oriented (OO) self-adaptive systems, using the i* framework as the language for goal-oriented models. Our evaluations of OO self-adaptive systems developed in Java for Android mobile devices show that the strategy helps realign a system with the best practices recommended by the literature, facilitating future evolutions.
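As a minimal, hypothetical illustration of the MAPE control loop referenced above (all names and the scaling policy are invented, not taken from the thesis):

```python
class MapeLoop:
    """Minimal sketch of a MAPE (Monitor-Analyze-Plan-Execute) loop;
    the probe/effector wiring and scaling policy are illustrative."""

    def __init__(self, probe, effector, target_latency_ms=200.0):
        self.probe = probe            # callable: current latency [ms]
        self.effector = effector      # callable: apply a worker count
        self.target = target_latency_ms
        self.workers = 1

    def monitor(self):
        return self.probe()           # awareness: observe own state

    def analyze(self, latency_ms):
        return latency_ms > self.target   # is adaptation needed?

    def plan(self):
        return self.workers + 1       # naive scale-up plan

    def execute(self, new_workers):
        self.workers = new_workers
        self.effector(new_workers)

    def run_once(self):
        latency = self.monitor()
        if self.analyze(latency):
            self.execute(self.plan())

# Illustrative wiring with stub probe and effector.
loop = MapeLoop(probe=lambda: 250.0,
                effector=lambda n: print(f"scaling to {n} workers"))
loop.run_once()   # latency 250 > 200, so the loop scales up
```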
375

Network Inference from Perturbation Data: Robustness, Identifiability and Experimental Design

Groß, Torsten 29 January 2021 (has links)
'Omics' technologies provide extensive quantifications of the components of biological systems but rarely characterize the interactions between them. To fill this gap, various network reconstruction methods have been developed over the past twenty years. Using perturbation data, these methods can deduce functional mechanisms in gene regulation, signal transduction, intra-cellular communication and many other cellular processes. Nevertheless, this reverse-engineering problem remains essentially unsolved, because inferred networks are often based on inapt assumptions and lack interpretability as well as a rigorous description of identifiability. To overcome these shortcomings, this thesis first presents a novel inference method based on a simple response logic. The underlying assumptions are so mild that the approach is suitable for a wide range of applications, while it also outperforms existing methods on standard benchmark data sets. For the MAPK and PI3K signalling pathways in an adenocarcinoma cell line, it derived plausible network hypotheses which explain the distinct sensitivities of PI3K mutants to targeted inhibitors. Second, an intuitive maximum-flow problem is shown to describe the identifiability of network interactions. This analytical result makes it possible to devise identifiable effective network models in underdetermined settings and to optimize the design of costly perturbation experiments. Benchmarked on a database of human pathways, full network identifiability is obtained with less than a third of the perturbations needed in random experimental designs. Finally, the thesis presents mathematical advances within Modular Response Analysis (MRA), a popular framework for quantifying network interaction strengths. It is shown that MRA can be approximated as an analytically solvable total least squares problem. This insight drastically reduces the computational complexity, allowing much bigger networks to be modelled and novel large-scale perturbation data to be handled.
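The last result reduces MRA to a total least squares (TLS) problem. As background, the sketch below shows the classic SVD-based TLS solution for an overdetermined system Ax ≈ b with errors in both A and b; it is the generic textbook construction, not the thesis's exact MRA formulation.

```python
import numpy as np

def tls(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Total least squares solution of A x ~= b via SVD, allowing
    errors in both A and b (classic Golub/Van Loan construction)."""
    n = A.shape[1]
    # Stack [A | b] and take the right singular vector belonging
    # to the smallest singular value.
    _, _, vt = np.linalg.svd(np.column_stack([A, b]))
    v = vt[-1]            # last row of V^T = smallest singular direction
    return -v[:n] / v[n]  # [x; -1] is proportional to v

# Tiny usage example with noise in both A and b.
rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0])
A = rng.normal(size=(50, 2))
b = A @ x_true
x_hat = tls(A + 0.01 * rng.normal(size=A.shape),
            b + 0.01 * rng.normal(size=b.shape))
print(x_hat)  # close to [2, -1]
```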
376

Web Migration Revisited: Addressing Effort and Risk Concerns

Heil, Sebastian 25 February 2021 (has links)
Web systems are widely used and accepted due to their advantages over traditional desktop applications. Modernizing existing non-web software towards the web, however, is a complex and challenging task because of the characteristics of legacy systems. Independent software vendors struggle to commence web migration because of the effort and risk involved. Through systematic field research and problem analysis, this situation is analyzed further, deriving a set of requirements that represent the effort and risk concerns and are used to assess the state of the art in the field. Existing web migration research exhibits gaps concerning dedicated approaches for the initial phase and the feasibility of the proposed strategies with limited resources and expertise. This thesis proposes a solution to address these shortcomings and to support independent software vendors in commencing web migration, focusing on effort and risk. The main idea is to provide a set of dedicated solutions that close the identified gaps, in the form of a methodology and a supporting tool suite which transfer paradigms that have successfully solved similar problems in other areas of computer science into the web migration domain. These solutions constitute the proposed approach, Agile Web Migration for SMEs (AWSM), consisting of methods, tools, principles, and formalisms for reverse engineering, risk management, customer impact control, and migration strategy selection. The thesis describes the research on these ideas in the context of a collaboration project with an independent software vendor. The applicability and feasibility of the concepts are demonstrated in several evaluation experiments, integrating empirical user studies and objective measurements. The thesis concludes with an evaluation based on the requirements assessment and the application of the solutions in the application scenario, and provides an outlook on future work.
Contents: 1 Introduction; 2 Requirements Analysis; 3 State of the Art; 4 Addressing Effort and Risk Concerns in Web Migration; 5 AWSM Reverse Engineering Method; 6 AWSM Risk Management Method; 7 AWSM Customer Impact Control Method; 8 Evaluation; 9 Conclusion and Outlook
377

A Feasibility Study of an Automated Repair Process using Laser Metal Deposition (LMD) with a Machine-Integrated Component Measuring Solution

Säger, Florian January 2019 (has links)
The repair of worn or damaged components is becoming more attractive to manufacturers, since it enables them to save resources such as raw material and energy. Costs can thereby be reduced and profit maximised. Enabling the re-use of components extends their lifetime, which improves sustainability. However, repair is not widely applied, mainly because the cost of repairing often exceeds the cost of purchasing a new component. One of the biggest expense factors in repairing a metal component is the labour-intensive task of identifying and quantifying worn or damaged areas with various external measurement systems. An automated measuring process would reduce this cost significantly and open the application up to less cost-intensive components. To automate the repair process in a one-machine solution, a measuring device must be integrated within the machine enclosure. To that end, different measuring solutions are assessed for their applicability to the Trumpf TruLaser Cell 3000 Series, a machine that uses Laser Metal Deposition (LMD) technology to print, or rather weld, metal onto a target surface. After a theoretical analysis of the different solutions, the most suitable one is validated by applying it to the machine. During the validation, a surface model of a test component is generated. The result is used to determine the capability of detecting worn areas through an automated target-actual comparison in a specialised CAM program. By verifying the capability of detecting worn areas and executing a successful repair, the fundamentals of a fully automated repair process in a one-machine solution are shown to be feasible.
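A minimal sketch of the kind of target-actual comparison described above: deviations between a registered surface scan and the nominal CAD geometry are measured as nearest-neighbour distances and thresholded. The point data and tolerance are invented for illustration; this is not the specialised CAM program used in the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree

def worn_regions(scan_pts: np.ndarray, cad_pts: np.ndarray,
                 tol_mm: float = 0.1) -> np.ndarray:
    """Flag scanned points deviating from the nominal CAD surface.

    scan_pts, cad_pts: (N, 3) point arrays in machine coordinates,
    assumed to be already registered/aligned. Returns a boolean mask
    over scan_pts marking candidate worn areas."""
    tree = cKDTree(cad_pts)           # samples of the nominal surface
    dist, _ = tree.query(scan_pts)    # nearest-neighbour deviation
    return dist > tol_mm

# Illustrative use: a flat nominal patch with a dent in the scan.
cad = np.array([[x, y, 0.0] for x in np.linspace(0, 10, 50)
                            for y in np.linspace(0, 10, 50)])
scan = cad.copy()
scan[1200:1300, 2] -= 0.5             # simulated wear, 0.5 mm deep
mask = worn_regions(scan, cad)
print(f"{mask.sum()} of {len(scan)} points flagged as worn")
```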
378

Analysing artefacts dependencies to evolving software systems

Jaafar, Fehmi 08 1900 (has links)
Program maintenance accounts for the largest part of the cost of any program. During maintenance activities, developers implement changes (sometimes simultaneously) on artefacts to fix bugs and to implement new requirements. Developers therefore need knowledge to identify hidden dependencies among program artefacts and to detect correlated artefacts. As programs evolve, their designs become more complex and harder to change. In the absence of the necessary knowledge about artefact dependencies, developers may introduce design defects and faults that cause development and maintenance costs to rise. Developers must therefore understand the dependencies among program artefacts and take proactive steps to facilitate future changes and minimize fault-proneness. On the one hand, maintaining a program without understanding the dependencies between its artefacts may lead to the introduction of faults. On the other hand, when developers lack knowledge about the impact of their maintenance activities, they may introduce design defects, which have a negative impact on program evolution. Developers thus need mechanisms to understand how a change to an artefact will impact the rest of the program's artefacts, and tools to detect the impact of design defects. In this thesis, we propose three principal contributions. The first is two novel change patterns that model new co-change and change-propagation scenarios. We introduce the Asynchrony change pattern, corresponding to macro co-changes, i.e., files that co-change within a large time interval (change periods), and the Dephase change pattern, corresponding to dephased macro co-changes, i.e., macro co-changes that always happen with the same shift in time. We present our approach, named Macocha, and show that these new change patterns provide useful information to developers. The second contribution is a novel approach to analyse the evolution of classes in object-oriented programs and to link different evolution behaviours to faults. In particular, we define an evolution model for each class to study the evolution and co-evolution dependencies among classes and to relate such dependencies to fault-proneness. The third contribution concerns the impact of design-defect dependencies. We propose a study mining the link between design-defect dependencies, such as co-change dependencies and static relationships, and fault-proneness. We found that the negative impact of design defects propagates through their dependencies. The three contributions are evaluated on open-source programs.
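A rough sketch of how macro co-changes of the kind described above can be mined from a version-control log: each file's change timestamps are grouped into change periods, and file pairs whose periods repeatedly overlap are flagged. The gap and overlap thresholds are illustrative assumptions, not those of Macocha.

```python
from itertools import combinations

def change_periods(times, gap=7.0):
    """Group a file's change timestamps (in days) into change periods:
    a new period starts after a silence longer than `gap` days."""
    periods, start, last = [], times[0], times[0]
    for t in times[1:]:
        if t - last > gap:
            periods.append((start, last))
            start = t
        last = t
    periods.append((start, last))
    return periods

def macro_cochanges(history, gap=7.0, min_overlap=2):
    """history: {file: sorted change timestamps in days}.
    Returns file pairs whose change periods repeatedly overlap,
    a rough analogue of the Asynchrony pattern."""
    periods = {f: change_periods(ts, gap) for f, ts in history.items()}
    pairs = []
    for (fa, pa), (fb, pb) in combinations(periods.items(), 2):
        overlaps = sum(1 for a in pa for b in pb
                       if a[0] <= b[1] and b[0] <= a[1])
        if overlaps >= min_overlap:
            pairs.append((fa, fb, overlaps))
    return pairs

# Illustrative usage with synthetic timestamps (days since first commit).
hist = {"core.py": [0, 1, 30, 31, 60],
        "ui.py":   [1, 2, 31, 60],
        "docs.md": [200]}
print(macro_cochanges(hist))  # [('core.py', 'ui.py', 3)]
```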
379

Trustworthiness, diversity and inference in recommendation systems

Chen, Cheng 28 September 2016 (has links)
Recommendation systems are information-filtering systems that help users explore large amounts of information effectively and efficiently and identify items of interest. Accurate predictions of users' interests improve user satisfaction and benefit businesses and service providers. Researchers have made tremendous efforts to improve the accuracy of recommendations. Emerging trends in technologies and application scenarios, however, pose challenges beyond accuracy for recommendation systems. Three new challenges are: (1) opinion spam results in untrustworthy content and makes recommendations deceptive; (2) users prefer diversified content; (3) in some applications, user behavior data may not be available for inferring users' preferences. This thesis tackles these challenges. We identify features of untrustworthy commercial campaigns on a question-and-answer website and adopt machine-learning techniques to implement an adaptive detection system that automatically detects commercial campaigns. We incorporate diversity requirements into a classic theoretical model and develop efficient algorithms with performance guarantees. We propose a novel and robust approach to infer user preference profiles from recommendations using copula models; the approach can offer in-depth business intelligence for physical stores that depend on Wi-Fi hotspots for mobile advertisement.
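As background on the copula machinery mentioned above, the sketch below fits a Gaussian copula by rank-transforming each variable to normal scores and estimating their correlation. This is a generic textbook construction under an assumed Gaussian copula, not the inference model developed in the thesis.

```python
import numpy as np
from scipy.stats import norm, rankdata

def fit_gaussian_copula(X: np.ndarray) -> np.ndarray:
    """Estimate the correlation matrix of a Gaussian copula.

    X: (n_samples, n_vars) observations with arbitrary marginals.
    Each column is mapped to (0, 1) via ranks, then to standard
    normal scores; the copula is summarized by their correlation."""
    n, d = X.shape
    U = np.column_stack([rankdata(X[:, j]) / (n + 1) for j in range(d)])
    Z = norm.ppf(U)                    # normal scores
    return np.corrcoef(Z, rowvar=False)

# Illustrative usage: two dependent variables with different marginals.
rng = np.random.default_rng(1)
z = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=2000)
X = np.column_stack([np.exp(z[:, 0]),    # lognormal marginal
                     z[:, 1] ** 3])      # heavy-tailed marginal
print(fit_gaussian_copula(X))            # off-diagonal close to 0.8
```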
380

Bezpečnostní analýza virtuální reality a její dopady / Security Analysis of Immersive Virtual Reality and Its Implications

Vondráček, Martin January 2019 (has links)
Virtual reality is nowadays used not only for entertainment but also for work and social interaction, where privacy and confidentiality of information have high priority. Unfortunately, the security measures applied by software vendors are often insufficient. This thesis presents an extensive security analysis of the popular virtual reality application Bigscreen, which has more than 500,000 users. Techniques of network traffic analysis, penetration testing, reverse engineering, and even application crippling were employed. The research led to the discovery of critical vulnerabilities that directly compromised users' privacy and allowed an attacker to take full control of a victim's computer. The discovered security flaws enabled the distribution of malware and the creation of a botnet using a computer worm spreading through virtual environments. A novel virtual-reality cyber attack, named Man-in-the-Room, was devised. In addition, a security flaw in the Unity engine was discovered. Responsible disclosure of the discovered flaws helped mitigate the risks for more than half a million users of Bigscreen and for users of all affected Unity applications worldwide.
