261 |
The design and implementation of a security and containment platform for peer-to-peer media distribution / Die ontwerp en implimentasie van ’n sekure en begeslote platvorm vir portuurnetwerk mediaverspreiding — Storey, Quiran (2013)
Thesis (MScEng)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: The way in which people consume video is changing with the adoption of
new technologies such as tablet computers and smart televisions. These new
technologies, along with the Internet, are moving video distribution away from
satellite and terrestrial broadcast to distribution over the Internet. Online services now offer the same content that was originally available only on satellite broadcast television. However, these services are only viable in countries with
high speed, inexpensive Internet bandwidth. The need therefore exists for
alternative services to deliver content in countries where bandwidth is still
expensive and slow. These include many of the developing nations of Africa.
In this thesis we design and develop a video distribution platform that
relies on peer-to-peer networking to deliver high quality video content. We use
an existing video streaming peer-to-peer protocol as the primary distribution
mechanism, but allow users to share video over other protocols and services.
These can include BitTorrent, DC++ and users sharing hard drives with one
another. In order to protect the video content, we design and implement a
security scheme that prevents users from pirating video content, while allowing easy distribution of video data. The core of the security scheme requires a low
bandwidth Internet connection to a server that streams keys to unlock the
video content. The project also includes the development of a custom video
player application to integrate with the security scheme.
The platform is aimed at, though not limited to, high-speed local area networks where bandwidth is free. In order for the platform to support feasible
business models, we provision additional services, such as video cataloging
and search, video usage monitoring and platform administration. The thesis
includes a literature study on techniques and solutions to secure video entertainment,
specifically in a peer-to-peer environment. / AFRIKAANS ABSTRACT: The way people consume video is changing with the adoption of new technologies such as tablet computers and smart television sets. These new technologies, together with the Internet, mean that video distribution is shifting away from satellite broadcast and increasingly takes place over the Internet. Online services today offer the same content that was previously distributed only by broadcast television. These services are, however, only viable in countries with high-speed, inexpensive Internet bandwidth. There is therefore a need for alternatives to these services in countries where bandwidth is still expensive and slow; many African countries fall into this category. In this thesis a video distribution platform is designed and developed that uses peer-to-peer networking to distribute high-quality video material. The system uses an existing peer-to-peer streaming protocol as the primary distribution mechanism, but also allows users to share video content directly with other users and services; this includes BitTorrent, DC++ and users sharing hard drives with one another. To protect the video content, we design and implement a security scheme that prevents users from misappropriating the video content while at the same time facilitating the distribution of the data. This includes the development of a custom video player. The core of the security scheme requires a low-bandwidth Internet connection to a server that broadcasts keys to unlock the video content. Although not limited to them, the platform is aimed at high-speed local area networks with free bandwidth. To let the platform support a feasible business model, we have provided for additional services such as video cataloguing with search functions, video usage monitoring and platform administration. The thesis includes a literature study on techniques and solutions for protecting video data, specifically in the peer-to-peer environment.
|
262 |
Transaktionsplattformar i B2B-kontexter : Hur vill potentiella användare att plattformen ska se ut? / Transaction platforms in B2B contexts : How do potential users want the platform to be designed? — Sörgård, Fred; Artursson, Filip (January 2022)
Purpose: The overall purpose of the thesis is to understand how transaction platforms can succeed in early stages, by understanding how a transaction platform should be designed in order to attract many users. Method: This study used a qualitative research approach in which the interviews were conducted in three phases. Interviews were conducted with 24 Swedish companies and organisations active in the energy industry. Two different user groups were interviewed: buyers and providers. In addition, interviews were conducted with industry experts. Results: The results section is divided into two parts. The first part builds an understanding of seven factors that influence potential users' intention to adopt a transaction platform, presenting how these factors affect the adoption decision and which actions the platform owner can take to increase the intention to adopt. The second part presents a framework that the platform owner can use in the design stage; it concerns identifying interesting user groups and addressing the factors that influence the adoption decision. Theoretical and practical contribution: Existing literature has rarely included several phases of a platform's life cycle within the same study, focusing instead on one phase at a time. This study contributes theoretically by examining how the design phase affects the adoption phase of transaction platforms, and by examining the early stages of transaction platforms; the design phase has rarely been examined in the existing literature. The study also contributes practically by presenting a framework that can help managers or entrepreneurs design a new transaction platform; using the framework can ease the handling of the chicken-and-egg problem that new transaction platforms always face.
Limitations and future research: This study has two main limitations, which open up avenues for future research. The first is that the interview group is relatively narrow, which may lead to industry-specific results; future studies can therefore examine other industries and compare their findings with this study's results to increase generalisability. The second limitation is that this study only interviewed potential users of transaction platforms, not current users (that is, adopters); the interviews were therefore relatively hypothetical, with much of the focus on what the interviewees could imagine. Future studies can therefore also interview current users of a transaction platform to develop the results further. / Purpose: The main purpose of the thesis is to understand how transaction platforms can become successful in early phases. This is achieved by understanding how a transaction platform should be designed in order to attract many users. Method: This study followed a qualitative research approach, and the interviews were performed in three phases. Interviews were conducted with 24 companies and organizations within the energy industry. Two different user groups were interviewed: buyers and sellers. In addition, people with deep knowledge of the industry were interviewed. Result: The result section is divided into two parts. The first part establishes knowledge about seven factors that affect potential users' intention to adopt a transaction platform, describing in what way these factors affect the adoption decision and which actions the platform owner can take to increase the intention to adopt. The second part presents a framework that platform owners can use in the design phase; the framework is about identifying interesting user groups and addressing the factors that affect the adoption decision.
Practical and theoretical contribution: Existing literature has rarely included several phases of a platform's life cycle in a single study; previous studies have typically focused on one phase at a time. This study contributes to the literature by investigating how the design phase affects the adoption phase of transaction platforms. Additionally, it contributes by investigating the early phase of a platform, which is rare in the existing literature. The study contributes practically by providing a framework that can help decision-makers and entrepreneurs design a new transactional platform; using the framework can help managers cope with the chicken-and-egg dilemma that new platforms often face. Limitations and future studies: This study has two limitations that open up future research. Firstly, the interview group is relatively narrow, which can lead to industry-specific findings; future studies can investigate other industries and compare their findings with this study's to identify more general results. Secondly, this study exclusively interviewed potential users, so the interview study was relatively hypothetical and much of the focus was on what the interviewees could imagine. Future studies can therefore interview users of an existing transaction platform to develop the framework further.
|
263 |
Communication centric platforms for future high data intensive applications — Ahmad, Balal (January 2009)
The notion of platform-based design is considered a viable solution for boosting design productivity by favouring a reuse-oriented design methodology. With the scaling down of device feature size and the scaling up of design complexity, throughput limitations, signal integrity and signal latency are becoming bottlenecks in future communication-centric System-on-Chip (SoC) design. This has given rise to communication-centric platform-based designs. The development of heterogeneous multi-core architectures forces on-chip communication media tailored for a specific application domain to deal with multi-domain traffic patterns, which makes current application-specific communication-centric platforms unsuitable for future SoC architectures. The work presented in this thesis explores current communication media to establish the expectations for future on-chip interconnects. A novel communication-centric platform-based design flow is proposed, consisting of four communication-centric platforms based on a shared global bus, a hierarchical bus, crossbars and a novel hybrid communication medium. Developed with a smart platform controller, the platforms support the Open Core Protocol (OCP) socket standard, allowing cores to be integrated in a plug-and-play fashion without the need to reprogram the pre-verified platforms. This drastically reduces the design time of SoC architectures. Each communication-centric platform has different throughput, area and power characteristics; thus, depending on the design constraints, processing cores can be integrated into the most appropriate communication platform to realise the desired SoC architecture. A novel hybrid communication medium is also developed in this thesis, which combines the advantages of two different types of communication media, a crossbar matrix and a shared bus, in a single SoC architecture.
Simulation results and the implementation of a WiMAX receiver as a real-life example show a 65% increase in data throughput over the shared-bus-based communication medium, and a 13% decrease in area and an 11% decrease in power compared with the crossbar-based communication medium. To automate the generation of SoC architectures with optimised communication architectures, a tool called SOCCAD (SoC Communication Architecture Development) was developed. Components needed for the realisation of a given application can be selected from the tool's built-in library. Offering optimised communication-centric placement, the tool generates complete SystemC code for the system with different interconnect architectures, along with its power and area characteristics. The generated SystemC code can be used for quick simulation and, coupled with efficient test benches, for quick verification. Network-on-Chip (NoC) is considered a solution to the communication bottleneck in future SoC architectures with data throughput requirements above 10 GB/s. It aims to provide low power, efficient link utilisation, reduced data contention and reduced silicon area. Current on-chip networks, developed with fixed architectural parameters, do not utilise the available resources efficiently. To increase this efficiency, a novel dynamically reconfigurable NoC (drNoC) is developed in this thesis. The proposed drNoC reconfigures itself in terms of switching, routing and packet size as the communication requirements of the system change at run time, thus utilising the maximum available channel bandwidth. To increase the applicability of drNoC, the network interface is designed to support the OCP socket standard. This makes drNoC a highly reusable communication framework, qualifying it as a communication-centric platform for data-intensive SoC architectures.
Simulation results show a 32% increase in data throughput and a 22-35% decrease in network delay when compared with a traditional NoC with fixed parameters.
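The design flow's core idea, matching a design's constraints to the most appropriate communication platform, can be sketched as a constraint-driven selection step. The normalized figures below are hypothetical (shared-bus baseline set to 1.0; the hybrid entries are back-derived from the 65%/13%/11% figures above), and `pick_platform` is an illustrative helper, not part of SOCCAD.

```python
# Hypothetical normalized characteristics of the four communication platforms.
platforms = {
    "shared_bus":   {"throughput": 1.00, "area": 1.00, "power": 1.00},
    "hierarchical": {"throughput": 1.60, "area": 1.30, "power": 1.20},
    "crossbar":     {"throughput": 2.40, "area": 2.00, "power": 1.80},
    "hybrid":       {"throughput": 1.65, "area": 1.74, "power": 1.60},
}

def pick_platform(min_throughput, weight_area=0.5, weight_power=0.5):
    # Keep only platforms meeting the throughput constraint, then pick the
    # one minimising a weighted area/power cost.
    feasible = {n: p for n, p in platforms.items()
                if p["throughput"] >= min_throughput}
    if not feasible:
        raise ValueError("no platform meets the throughput constraint")
    return min(feasible, key=lambda n: weight_area * feasible[n]["area"]
                                       + weight_power * feasible[n]["power"])
```

With a modest throughput requirement the cheap shared bus wins; tightening the constraint pushes the choice towards the hierarchical bus and finally the crossbar, mirroring the trade-off the abstract describes.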
|
264 |
Vztahy mediálních organizací a digitálních zprostředkovatelů / Relations between media organizations and digital intermediaries — Bešťák, Václav (January 2019)
The diploma thesis focuses on the relations between digital intermediaries and media organizations on the Czech market. Its goal is to describe a new, ever-evolving environment that has been shaped by the swift rise of digital platforms and by the change in publishers' business models. The thesis puts this phenomenon in the context of media studies and critical political economy and defines the terms "digital platforms" and "digital intermediaries". It also looks more closely at current challenges related to content consumption in the digital environment, including a discussion of copyright, changes in user behavior and content monetization. The research part is based on individual in-depth interviews with representatives of digital intermediaries, media organizations and third parties. The analysis adopts a general ethical/normative point of view and aims to understand how the parties interact and work together, how they perceive this environment, and how they see its changes and future development.
|
265 |
Uma plataforma tecnológica para organizações associativas cibernéticas : Escritório da Resiliência Hídrica / A technological platform for cybernetic associative organizations: the Water Resilience Office — Rodrigues, Carlos Diego de Souza (January 2019)
Advisor: Jefferson Nascimento de Oliveira / Portuguese abstract: The rights of nature are indispensable for harmony in the spaces where life takes place and develops, where land use and occupation directly affect the availability and quality of fundamental resources such as water and other commons. Observing transnational and governmental initiatives, open science laboratories, companies and NGOs, this exploratory research consolidates scenarios about intense flows of crowds adrift from projections and about the adaptive capacity of agglomerations in the Anthropocene. These are elements in which the Internet and the paradigms of total service give rise to digital platforms for new products and services, suited to the reality of contemporary social games. Grounded in electronic governance for water, the explorations result in the discovery of cybernetic associative organizations (cyorgs) and the fundamental characteristics of the Water Resilience Offices. Supported by anthropological spaces, spaces of interaction and strategic implementations of innovation in sustainability, the products build the ÁguasML (Commons in Free Media) platform, implemented digitally with open source code via news portals, learning environments, automations and applications for data collection and distribution. They also point to some of the components of the hydro-technological platforms in the Water Resilience Offices, as well as the contents, experiences and characteristics of resilient... (complete abstract: see electronic access below) / Abstract: The rights of nature are indispensable for harmony in the spaces of participation and development of life, where the use and occupation of land directly affect the availability and quality of water and other commons.
Observing transnational initiatives, governments, open science laboratories, corporations and NGOs, this exploratory research consolidates scenarios of intense flows of crowds drifting from projections and of adaptive capacity in the Anthropocene. These are elements in which the Internet and the paradigms of the total service give rise to digital platforms for new products and services, adapted to the mechanized reality of contemporary social games. With these baselines and benchmarks in electronic governance for water, the results of the explorations are the discovery of cybernetic associative organizations (cyorgs) and the fundamental characteristics of Water Resilience Offices. Based on anthropological spaces, spaces of interaction and strategic implementations of innovation in water sustainability, the work builds the ÁguasML (Commons in Open Media) platform, digitally implemented with open source code through news portals, e-learning environments, automations and applications for data collection and distribution. It also shows some of the components of the hydrotechnology platforms at the offices for water resilience, as well as the contents, experiences and characteristics of resilient technologies in situations of water scarcity and vulnerability of rights. This office gen... (complete abstract: click electronic access below) / Master's
|
266 |
Um framework para agrupar funções com base no comportamento da comunicação de dados em plataformas multiprocessadas / A framework for clustering functions based on the behavior of data communication on multiprocessed platforms — Santos, Rafael Ribeiro dos (12 June 2018)
The increased demand for more efficient computing systems to achieve high performance has posed new challenges to the research community, which has had to search for new heterogeneous platforms for large applications. To exploit the full potential of these platforms, an application can be partitioned into smaller groups so that each group runs on a specific processing unit, reducing the communication bottleneck according to the communication behaviour observed during execution. With the purpose of offering a more efficient clustering, this project proposes analysing the clustering of an application taking into account not only the total volume of data but also the distribution of that volume over the execution time, together with bandwidth and transmission-rate constraints. Although some works consider the total data volume for clustering, they do not show how this volume is distributed or how the bandwidth constraint affects the clustering. Thus, in this project a framework was implemented to suggest a clustering that considers the distribution of the communication volume and bandwidth restrictions. In addition, an extension module was developed for the external tool MCProf (Memory and Communication Profiler) to obtain the distribution of the communication volume. The framework was validated through clustering tests in which the communication time of the clustering generated by the framework was compared with the results of clusterings from the literature. This approach yielded performance increases ranging from 1.117x to 2.621x for the applications used in the experiments. / The increased demand for more efficient computing systems to achieve high performance posed new challenges to the research community, which needed to search for new heterogeneous platforms for large applications.
To utilize the full potential of these platforms, the application can be partitioned into smaller groups, each running on a specific processing unit, to reduce the communication bottleneck according to the communication behavior during application execution. With the purpose of offering a more efficient clustering, this project proposes the analysis of the clustering of an application taking into account not only the total volume of data, but also the distribution of that volume over the execution time, together with bandwidth and transmission-rate restrictions. Although some studies consider the total volume of data for clustering, it is not clear how this volume is distributed or how the bandwidth constraint affects clustering. Thus, in this project a framework was implemented to suggest a clustering that considers the distribution of the communication volume and bandwidth restrictions. In addition, an extension module was developed for the external tool MCProf (Memory and Communication Profiler) in order to obtain the distribution of the communication. The validation of the framework was performed through clustering tests in which the communication time of the clustering generated by the framework was compared with the results of clusterings from the literature. The use of this approach showed an increase in performance ranging from 1.117x to 2.621x for the applications used in the experiments.
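The grouping step described above can be sketched as a greedy merge over a communication graph. This toy version considers only total pairwise volume, which is precisely the simplification the framework improves on by also modelling the temporal distribution and bandwidth limits, and all names are hypothetical.

```python
def cluster_functions(comm_volume, num_units):
    """comm_volume maps (f, g) function pairs to bytes exchanged;
    num_units is the number of processing units available."""
    funcs = {f for pair in comm_volume for f in pair}
    group = {f: i for i, f in enumerate(sorted(funcs))}  # one group each
    n_groups = len(funcs)
    # Merge along the heaviest-communicating pairs first, so the largest
    # traffic stays inside a processing unit instead of crossing the bus.
    for (a, b), _vol in sorted(comm_volume.items(), key=lambda kv: -kv[1]):
        if n_groups <= num_units:
            break
        if group[a] != group[b]:
            old, new = group[a], group[b]
            for f in funcs:
                if group[f] == old:
                    group[f] = new
            n_groups -= 1
    return group
```

For example, with `{("A","B"): 100, ("B","C"): 10, ("C","D"): 80}` and two units, the heavy A-B and C-D edges are made intra-group and only the light B-C edge crosses between units.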
|
267 |
Estudo comparativo de técnicas de escalonamento de tarefas dependentes para grades computacionais / Comparative Study of Task Dependent Scheduling Algorithms to Grid Computing — Aliaga, Alvaro Henry Mamani (22 August 2011)
As science advances, many applications in different areas require great computational power. Grid computing is an important alternative for obtaining high processing power; however, this computational power must be exploited well. Through the use of specialised scheduling techniques, resources can be used appropriately. Several scheduling algorithms have been proposed for grid computing, so it is necessary to follow a sound methodology to choose the algorithm that offers the best performance under given characteristics. In the present work we compare the scheduling algorithms (a) Heterogeneous Earliest Finish Time (HEFT), (b) Critical Path on a Processor (CPOP) and (c) Path Clustering Heuristic (PCH); each algorithm is evaluated with different applications and on different architectures using simulation techniques, following four criteria: (i) performance, (ii) scalability, (iii) adaptability and (iv) workload distribution. We distinguish two types of grid applications, (i) regular and (ii) irregular, since for irregular applications it is not easy to compare the scalability criterion. Under this set of criteria the HEFT algorithm has the best performance and scalability, while the three algorithms have the same level of adaptability. In workload distribution, HEFT makes better use of resources than the others. The CPOP and PCH algorithms, for their part, schedule the critical path on the processor that offers the earliest finish time, but this approach is not always the most appropriate. / As science advances, many applications in different areas need large amounts of computational power. Grid computing is an important alternative for obtaining high processing power, but this computational power must be well used. By using specialized scheduling techniques, resources can be properly used.
Currently there are several scheduling algorithms for grid computing; it is therefore necessary to follow a sound methodology to choose the algorithm that offers the best performance under given settings. In this work, we compare the task-dependent scheduling algorithms (a) Heterogeneous Earliest Finish Time (HEFT), (b) Critical Path on a Processor (CPOP) and (c) Path Clustering Heuristic (PCH); each algorithm is evaluated with different applications and on different architectures using simulation techniques, following four criteria: (i) performance, (ii) scalability, (iii) adaptability and (iv) workload distribution. We distinguish two kinds of grid applications, (i) regular and (ii) irregular, since for irregular applications it is not easy to compare the scalability criterion. Following this set of criteria, the HEFT algorithm achieves the best performance and scalability, while the three algorithms have the same level of adaptability. In workload distribution, the HEFT algorithm makes better use of resources than the others. The CPOP and PCH algorithms, on the other hand, schedule the tasks of the critical path on the processor that minimizes the earliest finish time, but this approach is not always the most appropriate.
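The core of HEFT, the best performer in this comparison, can be sketched as upward-rank list scheduling: each task is ranked by its average cost plus the heaviest communication-and-computation path to an exit task, then placed on the processor giving the earliest finish time. This is a simplified, non-insertion variant (real HEFT also considers idle slots between already-scheduled tasks), and the toy DAG and cost tables are assumptions.

```python
def heft_schedule(tasks, succ, cost, comm):
    """tasks: list of task ids; succ maps task -> successors; cost maps
    (task, proc) -> execution time; comm maps (t1, t2) -> transfer time,
    paid only when producer and consumer run on different processors."""
    procs = sorted({p for (_, p) in cost})
    avg = {t: sum(cost[t, p] for p in procs) / len(procs) for t in tasks}

    rank = {}
    def upward_rank(t):
        # Upward rank: average cost plus the heaviest successor path.
        if t not in rank:
            rank[t] = avg[t] + max(
                (comm[t, s] + upward_rank(s) for s in succ.get(t, [])),
                default=0.0)
        return rank[t]
    for t in tasks:
        upward_rank(t)

    pred = {t: [u for u in tasks if t in succ.get(u, [])] for t in tasks}
    proc_free = {p: 0.0 for p in procs}
    finish, where = {}, {}
    for t in sorted(tasks, key=lambda t: -rank[t]):
        best_eft, best_p = None, None
        for p in procs:
            # Data from predecessors on other processors pays the comm cost.
            ready = max((finish[u] + (0.0 if where[u] == p else comm[u, t])
                         for u in pred[t]), default=0.0)
            eft = max(ready, proc_free[p]) + cost[t, p]
            if best_eft is None or eft < best_eft:
                best_eft, best_p = eft, p
        finish[t], where[t] = best_eft, best_p
        proc_free[best_p] = best_eft
    return finish, where

# Toy DAG: A feeds B and C, which feed D; two processors p0 and p1.
tasks = ["A", "B", "C", "D"]
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
cost = {("A", "p0"): 2, ("A", "p1"): 3, ("B", "p0"): 3, ("B", "p1"): 2,
        ("C", "p0"): 2, ("C", "p1"): 3, ("D", "p0"): 2, ("D", "p1"): 2}
comm = {("A", "B"): 1, ("A", "C"): 1, ("B", "D"): 1, ("C", "D"): 1}
finish, where = heft_schedule(tasks, succ, cost, comm)
```

On this DAG the scheduler splits B and C across the two processors and co-locates D with C to avoid the larger transfer, finishing at time 8.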
|
268 |
Early evaluation of multicore systems soft error reliability using virtual platforms / Avaliação de sistemas de larga escala sob a influência de falhas temporárias durante a exploração inicial de projetos através do uso de plataformas virtuais — Rosa, Felipe Rocha da (January 2018)
The growing computing capacity of multiprocessing components such as processors and graphics processing units offers new opportunities for the fields of embedded and high-performance computing. The steadily growing computing capacity of multicore-based systems makes it possible to run complex applications efficiently with lower energy consumption than traditional single-core solutions. This efficiency and the growing complexity of application workloads encourage industry to integrate more and more processing components into the same system. The number of processing components employed in large-scale systems already exceeds a million cores, while 1000-core embedded platforms are commercially available. Beyond the enormous number of cores, the growing processing capacity, as well as the number of internal memory elements (for example, registers and RAM) inherent to emerging processor architectures, is making large-scale systems more vulnerable to transient and permanent errors. Moreover, to meet new performance and energy requirements, processors usually run at aggressive clock frequencies and with multiple voltage domains, increasing their susceptibility to transient errors such as those caused by radiation effects. The occurrence of transient errors can cause critical failures in system behaviour, which can lead to financial or human losses. While a rate of 280 transient errors per day was observed during the flight of a spacecraft, processing systems operating at ground level are expected to experience at least one transient error per day in the near future.
The growing susceptibility of multicore systems to transient errors necessarily demands new tools to evaluate the transient error resilience of multiprocessing components together with complex software stacks (operating system, drivers) early in the design phase. The main goal addressed by this Thesis is to develop a set of fault injection techniques that form a fault injection tool. The second goal of this Thesis is to lay the foundations for new reliability management disciplines that consider transient errors in emerging multi/manycore systems using machine learning. This work identifies multiple techniques that can be used to provide different levels of reliability according to the application workload and criticality. / The increasing computing capacity of multicore components like processors and graphics processing units (GPUs) offers new opportunities for the embedded and high-performance computing (HPC) domains. The progressively growing computing capacity of multicore-based systems makes it possible to perform complex application workloads efficiently at a lower power consumption than traditional single-core solutions. Such efficiency and the ever-increasing complexity of application workloads encourage industry to integrate more and more computing components into the same system. The number of computing components employed in large-scale HPC systems already exceeds a million cores, while 1000-core on-chip platforms are available in the embedded community. Beyond the massive number of cores, the increasing computing capacity, as well as the number of internal memory cells (e.g., registers, internal memory) inherent to emerging processor architectures, is making large-scale systems more vulnerable to both hard and soft errors.
Moreover, to meet emerging performance and power requirements, the underlying processors usually run at aggressive clock frequencies and across multiple voltage domains, increasing their susceptibility to soft errors, such as those caused by radiation effects. The occurrence of soft errors or Single Event Effects (SEEs) may cause critical failures in system behavior, which may lead to financial or human life losses. While a rate of 280 soft errors per day has been observed during the flight of a spacecraft, electronic computing systems working at ground level are expected to experience at least one soft error per day in the near future. The increased susceptibility of multicore systems to SEEs necessarily calls for novel cost-effective tools to assess the soft error resilience of the underlying multicore components with complex software stacks (operating system, drivers) early in the design phase. The primary goal addressed by this Thesis is to propose and develop a fault injection framework using state-of-the-art virtual platforms, to propose a set of novel fault injection techniques that direct the fault campaigns according to the software stack characteristics, and to extensively validate the framework with over a million simulation hours. The second goal of this Thesis is to set the foundations for a new discipline in soft error reliability management for emerging multi/manycore systems using machine learning techniques, identifying and proposing techniques that can provide different levels of reliability depending on the application workload and criticality.
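A fault injection campaign of the kind described above can be sketched on a toy CPU model. This is not the thesis's virtual-platform framework: the tiny register file, the inject-before-run policy and the masked/SDC classification are deliberate simplifications, and all names are hypothetical.

```python
import random

class ToyCPU:
    # Minimal stand-in for a virtual-platform processor model: 8 registers.
    def __init__(self):
        self.regs = [0] * 8

    def run(self, program):
        for op, dst, a, b in program:
            if op == "li":                    # load immediate: dst = a
                self.regs[dst] = a & 0xFFFFFFFF
            elif op == "add":                 # dst = reg[a] + reg[b]
                self.regs[dst] = (self.regs[a] + self.regs[b]) & 0xFFFFFFFF
        return self.regs[0]                   # register 0 holds the output

def inject_bit_flip(cpu, rng):
    # Single Event Upset model: flip one random bit of one random register.
    reg = rng.randrange(len(cpu.regs))
    cpu.regs[reg] ^= 1 << rng.randrange(32)

def fault_campaign(program, golden, trials=2000, seed=1):
    # Classify each run as masked (output unchanged) or silent data corruption.
    rng = random.Random(seed)
    sdc = 0
    for _ in range(trials):
        cpu = ToyCPU()
        inject_bit_flip(cpu, rng)  # a real campaign injects at a random cycle
        if cpu.run(program) != golden:
            sdc += 1
    return sdc / trials

program = [("add", 0, 1, 2)]       # r0 = r1 + r2, all registers start at 0
golden = ToyCPU().run(program)     # fault-free reference output
sdc_rate = fault_campaign(program, golden)
```

Only flips landing in registers the program actually reads (r1 and r2) corrupt the output here, so roughly a quarter of the injections surface as silent data corruption while the rest are masked, illustrating why campaigns must be directed by the software stack's actual resource usage.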
|
269 |
As cibercidades brasileiras: uma análise do panorama brasileiro de plataformas digitais, através do design / Brazilian cybercities: an analysis of the Brazilian panorama of digital platforms through design — Saldanha, Leandra Dezotti (2 April 2013)
Previous issue date: 2013-01-31 / The research topic of this master's project concerns how contemporary cities are developing in relation to new information and communication technologies and how they use digital platforms to enhance their image as they enter the context of the cybercities. The investigation is a multiple case study focusing on the Brazilian capitals and the digital platforms that represent them, whether governmental, touristic, social or collaborative. The analysis takes into account existing world reference models and the theoretical studies already developed on cybercities, particularly the cybercity models of Lemos (2002) and the cybercity platform architecture developed by Ishida (2002) for Kyoto, in order to build an overall panorama of how cybercities are structured in Brazil. From these results, the study draws comparisons and contextualises existing problems from the perspective of strategic design, generating data for new studies of how design can act in the strategic construction of cities in the digital universe. / The research topic of this master's project addresses how contemporary cities are developing in relation to new technologies of information and communication and how they use digital platforms to promote their image as they insert themselves into the context of cybercities. The research undertaken is a multiple case study, focusing on the Brazilian capitals and on the digital platforms that represent them, whether governmental, tourist, social or collaborative. This analysis takes into account the existing global reference models and the theoretical studies already undertaken on cybercities, especially the cybercity models of Lemos (2002) and the cybercity platform architecture developed by Ishida (2002) for Kyoto, in order to build an overview of how cybercities are structured in Brazil. Through these results, the study draws comparisons and contextualizes existing problems through the lens of strategic design, generating data for new studies of how design can work in the strategic construction of cities in the digital universe.
|
270 |
As narrativas sobre os algoritmos do Facebook : uma análise dos 10 anos do feed de notícias / The narratives about Facebook's algorithms: an analysis of 10 years of the News Feed
Araújo, Willian Fernandes January 2017 (has links)
This thesis follows the construction of the Facebook News Feed throughout its first ten years (2006–2016), with the objective of describing how the mechanism and the notion of algorithm were composed, enacted and transformed over that period. It analyzes the digital content, referred to here as 'textual devices', that publicly constructs what the News Feed is and does, describing the actors involved in this narrative and mapping their objectives and effects. The sample takes as its starting point the textual devices published on Facebook's institutional websites, Facebook Blog and Facebook Newsroom. From a reading of more than 1,000 digital publications by Facebook and other agents (users, content producers, the press, activists, etc.), the publications most relevant to the study were selected, with emphasis on events and circumstances of negotiation, change and controversy. The research approach combines perspectives from science and technology studies (STS) and actor-network theory (ANT), forming a set of procedures used to describe the performative character of texts. The analysis identifies three distinct moments in the construction of the notion of algorithm over the News Feed's trajectory, called the Edgerank Algorithm, the Right Algorithm and the User-centered Algorithm. At the same time, the News Feed is formulated as a constant flow. It is argued that changes to the mechanism are oriented toward generating engagement and keeping users connected to Facebook; engagement is, in the rationality that emerges from the construction of the News Feed, a commodity resulting from its action. Another relevant notion arising from the analysis is that of the algorithmic norm, a normative logic of visibility that seeks to regulate the relationship between content producers and the mechanism, punishing those who do not follow what Facebook calls good practices.
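For context on the first phase named above, the EdgeRank heuristic was publicly described by Facebook engineers (f8, 2010) as a sum over a story's "edges" of three factors: user affinity, edge weight by interaction type, and time decay. The sketch below illustrates only that publicly stated formula; the factor values and the `Edge` structure are hypothetical, and the production system was never disclosed in this form.

```python
from dataclasses import dataclass

@dataclass
class Edge:
    affinity: float  # strength of the viewer-creator relationship (hypothetical value)
    weight: float    # interaction type weight, e.g. a comment outweighs a like
    decay: float     # time-decay factor in (0, 1]; older edges count less

def edgerank_score(edges):
    """Publicly described EdgeRank: sum of affinity * weight * decay per edge."""
    return sum(e.affinity * e.weight * e.decay for e in edges)

# A story with one recent comment (heavy edge) and one older like (light edge):
story = [Edge(affinity=0.8, weight=2.0, decay=0.9),
         Edge(affinity=0.5, weight=1.0, decay=0.5)]
print(round(edgerank_score(story), 2))  # 0.8*2.0*0.9 + 0.5*1.0*0.5 = 1.69
```

The thesis's point is precisely that this transparent, formula-like framing was later displaced by the "Right Algorithm" and "User-centered Algorithm" narratives, in which no such closed form was offered.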
|