11 |
Mecanismos de proteção de conteúdo e modelamento DRM em TV digital / Content protection mechanisms and digital TV DRM modeling. Barbosa, Ana Claudia Dytz. 07 1900.
Master's dissertation (Dissertação de mestrado), Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2008.
This dissertation presents a broad bibliographic survey of the techniques used to handle digital rights for the access and use of multimedia content.
It first presents the Digital Television System standards and the respective middlewares currently employed around the world, with emphasis on the Japanese and European systems: the former because it is the basis of the current Brazilian Digital Television System, and the latter because it is currently the most widely used standard in this field. In this context, the processes that make up the Digital Television value chain are explored, from the content production phase through the programming, distribution and consumption phases. IPTV technology is also covered as an alternative for transporting digital TV content.
The main digital content protection mechanisms are then presented, together with their use for protecting access rights to restricted content along the Digital Television value chain. The techniques covered include the use of the descriptors of the System Information tables, content encryption tied to the conditional access system, the embedding of information through digital watermarking techniques, and the generation of licenses built from predefined descriptors carrying information about the access conditions, among others.
Finally, a case study is presented, proposing a generic architecture for access to restricted content with associated digital rights. The proposal is based on the main functions of a restricted content access system: authentication, user management, digital rights handling, financial accounting, and access authorization.
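A minimal Python sketch of the license idea described above, in which access conditions are carried by predefined descriptors and evaluated before access is granted; the descriptor tags, class names and checking rules are illustrative assumptions, not the structures defined in the dissertation.

```python
# Hypothetical example: a usage license assembled from predefined descriptors
# that carry access conditions (expiry, region, play count), then evaluated.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class LicenseDescriptor:
    tag: str      # e.g. "expiry", "region", "play_count"
    value: str

@dataclass
class License:
    content_id: str
    descriptors: list

def check_access(lic: License, region: str, now: datetime, plays_used: int) -> bool:
    """Return True only if every access condition in the license is satisfied."""
    for d in lic.descriptors:
        if d.tag == "expiry" and now > datetime.fromisoformat(d.value):
            return False
        if d.tag == "region" and region not in d.value.split(","):
            return False
        if d.tag == "play_count" and plays_used >= int(d.value):
            return False
    return True

lic = License("programme-42", [
    LicenseDescriptor("expiry", "2030-01-01T00:00:00+00:00"),
    LicenseDescriptor("region", "BR"),
    LicenseDescriptor("play_count", "3"),
])
print(check_access(lic, "BR", datetime.now(timezone.utc), plays_used=1))  # True
```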
__________________________________________________________________________________________ ABSTRACT / Due to the difficulty of finding solid bibliographic references on multimedia content protection in the Digital Television environment, the present work provides a broad bibliographic survey of the technologies used for the digital access and usage rights of multimedia content.
The current world Digital Television standards and their respective middlewares are first presented, with emphasis on the Japanese and European systems, as the former served as the basis for the Brazilian Digital Television system and the latter is the most widely used Digital Television system in the world.
In this context, the processes that compose the Digital Television value chain are explored, from the initial content production phase through the programming, distribution and consumption phases. IPTV technology is also presented as a content transport alternative for Digital Television.
The main digital content protection mechanisms are then presented along the Digital TV value chain. The techniques covered include the use of System Information table descriptors, content encryption tied to the conditional access system, watermarking information embedded in the content itself, and license creation based on predefined descriptors carrying access condition information, among others.
Finally, a case study is presented proposing a generic architecture for restricted content access based on the digital rights associated with that content. The proposal is based on the main functions of a restricted content access system: authentication, user management, digital rights handling, accounting, and access authorization.
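The following Python sketch chains the five functions named in the proposal (authentication, user management, digital rights handling, accounting and access authorization) into a single access decision; every name and data structure here is a hypothetical illustration of the flow, not the architecture actually specified in the dissertation.

```python
# Hypothetical sketch of a restricted-content access decision built from the
# five functions listed in the abstract.
import hashlib

USERS = {"alice": hashlib.sha256(b"secret").hexdigest()}    # user management store
RIGHTS = {("alice", "movie-7"): {"max_quality": "HD"}}      # digital rights store
LEDGER = []                                                 # accounting events

def authenticate(user: str, password: str) -> bool:
    """Authentication: compare against an unsalted demo hash (illustrative only)."""
    stored = USERS.get(user)
    return stored is not None and stored == hashlib.sha256(password.encode()).hexdigest()

def authorize(user: str, content_id: str) -> bool:
    """Digital rights handling, accounting and access authorization in one step."""
    rights = RIGHTS.get((user, content_id))
    LEDGER.append({"user": user, "content": content_id, "granted": rights is not None})
    return rights is not None

def request_access(user: str, password: str, content_id: str) -> bool:
    return authenticate(user, password) and authorize(user, content_id)

print(request_access("alice", "secret", "movie-7"))   # True
print(request_access("alice", "secret", "movie-9"))   # False (no rights), still logged
```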
|
12 |
The challenges of integrating disaster risk management (DRM), integrated water resources management (IWRM) and autonomous strategies in low-income urban areas : a case study of Douala, Cameroon. Roccard, Jessica. January 2014.
Climate change affects water resources suitable for human consumption, transforming water quality and quantity. These changes exacerbate the vulnerabilities of human society, increasing the importance of adequately protecting and managing water resources and supplies. Growing urban populations place additional stress on existing water resources, particularly increasing the vulnerability of people living in poor neighbourhoods. In urban areas, official responses to climate change are currently dominated by Disaster Risk Management (DRM); more recently, however, Integrated Water Resources Management (IWRM) has emerged to support the integration of climate change adaptation into water resource planning. Based on a case study of the city of Douala, Cameroon, the thesis examines the operational implementation of both frameworks, combining observations, semi-structured interviews with different stakeholders and a survey carried out in three poor communities. The research highlights the challenges of better integrating the two frameworks so that they adequately reach the urban poor, whilst remaining alert and responsive to the adaptation strategies the poor autonomously implement and develop. At present, the IWRM and DRM frameworks are implemented separately and do not clearly reach the urban poor, who face three major water-related issues (flooding, water-related diseases and water access). Other institutional water-related measures and projects are carried out by authorities in the low-income communities, but the institutions still struggle to manage the delivery of basic services and to protect these communities against hazards. The lack of effective outcomes from these institutional measures and projects has led to a strong process of autonomous adaptation by the inhabitants of poor communities. Driven by an adaptive capacity supported by the abundance of groundwater resources, they use coping and adaptive strategies, such as turning to alternative water suppliers, to reduce their vulnerability to water-related issues. Similarly, the frequency of flooding has led the urban poor to develop practices to minimise disaster impacts. However, the autonomous strategies developed face limitations imposed by the natural and built environment. In this context, the autonomous strategies of the urban poor and the institutional strategies appear to have a strong influence on each other: while institutional projects have initiated spontaneous strategies, other strategies reduce the willingness of the low-income neighbourhoods to participate in the implementation of official, externally derived development projects.
|
13 |
Elektronická kniha na slovenském trhu: Realita nebo fenomén? / Electronic Book on the Slovak Market: Reality or Phenomenon? Vazanová, Zuzana. January 2010.
The main purpose of this diploma thesis is to introduce the environment of electronic books and their technology. The development of e-books, the surrounding technical landscape, the distribution channels used, and book sales are also described. The practical part analyses the situation on the Slovak market from the customers' point of view.
|
14 |
Proposition d'architectures radio logicielles FPGA pour démoduler simultanément et intégralement les bandes radios commerciales, en vue d'une indexation audio / Proposal of FPGA-based software radio architectures for simultaneously and fully demodulating the commercial radio bands, with the aim of audio indexing. Happi Tietche, Brunel. 11 March 2014.
The expansion of radio and the development of new standards enrich the diversity and the amount of data carried on the broadcast airwaves. It therefore becomes worthwhile to develop a search engine capable of making all these data accessible, as internet search engines such as Google do. The possibilities offered by such an engine, were it to exist, are numerous. In this vein, the SurfOnHertz project, launched in 2010 and completed in 2013, aimed to develop a browser capable of indexing the audio streams of all radio stations. This indexing would include, among other things, keyword detection in the audio streams, commercial detection, and musical genre classification. Once developed, the browser would become the first search engine of its kind to process broadcast content. Taking up such a challenge requires a device that can capture all the stations being broadcast in the geographic area concerned, demodulate them, and pass the audio content to an indexing engine. The work of this thesis therefore aims to propose digital architectures, carried on an SDR platform, to extract, demodulate, and make available the audio content of every station broadcast in the receiver's geographic area. Given the large number of radio standards in existence today, the thesis focuses mainly on the FM and DRM30 standards; the proposed methodologies are nevertheless extensible to other standards. Most of the work was carried out on an FPGA. The choice of this type of component is justified by the great possibilities it offers in terms of processing parallelism, control of available resources, and embeddability. The algorithms were developed with a view to minimising the number of computation blocks used. Indeed, many of the implementations were carried out on a Stratix II, a technology with limited resources compared with the FPGAs available on the market today, which attests to the viability of the presented algorithms. The proposed algorithms thus perform simultaneous extraction of all radio channels both when stations can only occupy uniformly spaced slots, as with FM in Western Europe, and for standards in which the distribution of stations across the spectrum appears rather random, such as DRM30. A further part of the discussion concerns how to demodulate them simultaneously. / The expansion of radio and the development of new standards enrich the diversity and the amount of data carried by the broadcast radio waves. It therefore becomes sensible to develop a search engine with the capacity to make these data accessible, as internet search engines such as Google do. Such an engine could offer many possibilities. In that vein, the SurfOnHertz project, which was launched in 2010 and ended in 2013, aimed to develop a browser capable of indexing the audio streams of all radio stations. This indexing would result, among other things, in the detection of keywords in the audio streams, the detection of commercials, and the classification of musical genres. Once developed, the browser would become the first search engine of its kind to address broadcast content.
Taking up such a challenge requires a device to capture all the stations being broadcast in the geographic area concerned, demodulate them and transmit the audio contents to the indexing engine. The work of this thesis therefore aims to provide digital architectures carried on an SDR platform for extracting, demodulating, and making available the audio content of each station broadcast in the geographic area of the receiver. Given the large number of radio standards that exist today, the thesis focuses on the FM and DRM30 standards; the proposed methodologies are, however, extensible to other standards. The bulk of the work is FPGA-based. The choice of this type of component is justified by the great opportunities it offers in terms of parallelism of processing, control of the available resources, and embeddability. The algorithms were developed with the aim of minimising the number of computation blocks used. Moreover, many implementations were carried out on a Stratix II, a technology with limited resources compared to the FPGAs available on the market today, which attests to the viability of the presented algorithms. The proposed algorithms thus perform simultaneous extraction of all radio channels both when stations can only occupy uniformly spaced locations, as with FM in Western Europe, and for standards in which the distribution of stations across the spectrum appears rather random, such as DRM30. Another part of the discussion focuses on the means of demodulating them simultaneously.
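As a rough illustration of the per-channel demodulation step mentioned above, the NumPy sketch below applies a quadrature discriminator to one FM channel already brought to complex baseband; the sample rate, test tone and function layout are assumptions made for the example, whereas the thesis itself targets a parallel fixed-point FPGA implementation of such processing.

```python
# Floating-point sketch of FM demodulation by quadrature discriminator.
import numpy as np

fs = 250_000          # assumed per-channel sample rate (Hz)
f_dev = 75_000        # FM broadcast peak frequency deviation (Hz)

t = np.arange(0, 0.01, 1 / fs)
audio = np.sin(2 * np.pi * 1_000 * t)                 # 1 kHz test tone
phase = 2 * np.pi * f_dev * np.cumsum(audio) / fs     # integrate to get FM phase
baseband = np.exp(1j * phase)                         # complex FM baseband signal

# The phase step between consecutive samples is proportional to the
# instantaneous frequency, i.e. to the modulating audio.
demod = np.angle(baseband[1:] * np.conj(baseband[:-1])) * fs / (2 * np.pi * f_dev)

print(float(np.max(np.abs(demod - audio[1:]))))       # near-zero reconstruction error
```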
|
15 |
Effektivisera maskeringsprocessen inför pulverlackering genom automatisering : En fallstudie hos Alumbra AB / Optimizing the masking process prior to powder coating by using automation : A case study at Alumbra AB. Halvarsson, Jonne; Ring, Wille. January 2024.
In sheet metal part manufacturing, the surface treatment method powder coating requires the part to be masked, which is time-consuming. Today this is done with silicone plugs fitted by hand, a monotonous task given the large production volumes. The purpose of this study is to explore suitable masking methods for powder coating beyond the current one, as well as the possibilities for automating the masking process. The work is intended to form the basis for future implementations, which means that proposed solutions are developed as conceptual CAD models. The research plan was based on the three-step Design Research Methodology design process, with elements from the design tool Getting Design Right woven in. The study found that the current masking method is best suited for automation, and that the most suitable automation alternative was a robot-based solution. The answers to the study's research questions were used to develop a concept for an automated, robot-based masking process that can form the basis for future implementations. However, this is a first iteration and can therefore be improved in many respects.
|
16 |
Kopieringsskydd : Hur skapar vi balans mellan användare och näringsliv? / Copy protection: How do we create balance between users and the industry? Andersson, Andreas; Smedberg, Niclas. January 2010.
This study aims to untangle parts of the conflict between industry and consumers when it comes to copy protection for computer games. In today's society there is a major debate about illegal file sharing and its significance for the industry. We believe there must be a better solution to the problem than developing ever better copy protection, so we set out to find protection schemes that create a balance between advantages and disadvantages for industry and consumers, so that both sides benefit roughly equally. We do this through a number of interviews with the industry and then a number of interviews with consumers; we then draw out the advantages and disadvantages for both parties for each copy protection scheme we selected to examine. Our main focus is on balance, and we have used Grounded Theory to derive our theory from the interviews we conducted. The study ends with a review of the results we arrived at. The result of the study is that we drew certain conclusions about different copy protection schemes, and we also concluded that a well-balanced copy protection scheme should have the same amount of advantages and disadvantages for both consumers and developers, while the advantages should be as numerous as possible. One of the protection schemes we looked at that fits this well is protection presented as a service for the user, such as Steam.
|
17 |
E-book Security: An Analysis of Current Protection Systems. Qiang, Hao. January 2003.
E-books have a wide range of application spheres, from rich-media presentations to web site archiving, from writing to financial statements. They make the publishing, storing and distributing of information quite simple. As a new publication technique, the main concern with e-books is copyright infringement. To prevent e-books from free duplication and distribution, different security mechanisms are used in their publishing and distributing processes. By investigating and analyzing Digital Rights Management (DRM) and Electronic Book Exchange (EBX), this thesis presents some security issues that the e-book industry is, or should be, aware of. Various security problems and possible solutions are highlighted by means of two case studies.
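A toy Python sketch of the basic mechanism the abstract refers to: the e-book payload is encrypted, and reading it requires a key delivered separately, for instance inside a license. It uses the third-party cryptography package, the function names are hypothetical, and it does not reproduce the DRM or EBX systems analysed in the thesis.

```python
# Illustrative only: encrypt an e-book payload and open it with a license key.
from cryptography.fernet import Fernet

def package_ebook(plaintext: bytes):
    """Encrypt the book; the key would travel in a license, not with the file."""
    key = Fernet.generate_key()
    return Fernet(key).encrypt(plaintext), key

def open_ebook(ciphertext: bytes, license_key: bytes) -> bytes:
    """Reading is only possible when the license supplies the correct key."""
    return Fernet(license_key).decrypt(ciphertext)

packaged, key = package_ebook(b"Chapter 1: ...")
print(open_ebook(packaged, key))  # b'Chapter 1: ...'
```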
|
18 |
Digital Rights Management on an IP-based set-top box / Digital Rights Management för en IP-baserad set-top box. Hallbäck, Erik. January 2005.
Digital Rights Management (DRM) is a technology that allows service and content providers to distribute and sell digital content in a secure way. The content is encrypted and packaged with a license that is enforced before playback is allowed. This thesis covers how a DRM system works and gives some cryptographic background. It also shows how Microsoft DRM for Network Devices can be implemented on an IP-based set-top box.
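Below is a generic sketch of the "license enforced before playback" flow described above: the license binds a content key to a device identifier and an expiry time, and the player only obtains the key when every condition holds. It is not Microsoft's DRM for Network Devices protocol; all names and fields are illustrative assumptions.

```python
# Hypothetical license check performed before playback is allowed.
import time
from dataclasses import dataclass

@dataclass
class PlaybackLicense:
    content_id: str
    device_id: str        # set-top box the license was issued to
    expires_at: float     # Unix timestamp
    content_key: bytes    # key needed to decrypt the content

def enforce_license(lic: PlaybackLicense, device_id: str, content_id: str) -> bytes:
    """Return the decryption key only if every license condition holds."""
    if lic.content_id != content_id:
        raise PermissionError("license issued for different content")
    if lic.device_id != device_id:
        raise PermissionError("license not issued to this device")
    if time.time() > lic.expires_at:
        raise PermissionError("license expired")
    return lic.content_key

lic = PlaybackLicense("channel-5-film", "stb-001", time.time() + 3600, b"\x00" * 16)
key = enforce_license(lic, "stb-001", "channel-5-film")   # playback may now start
```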
|
19 |
DRM - utveckling, konflikter och framtid : konsumenters reaktioner på och företags användande av DRM / DRM - development, conflicts and the future: consumer reactions to and companies' use of DRM. Lövgren, Alexander. January 2013.
With the digital revolution within video games, the need for Digital Rights Management (DRM) has increased significantly, alongside the growing problem of software piracy. DRM was created to counter pirates by preventing illegal copying of software, ensuring that distributors receive an income for their work. Since the start of its use, DRM has drawn a lot of criticism from the users of the software it protects. The main purpose of this paper is to describe the creation and development of DRM by analysing how different groups view the phenomenon. The main questions are as follows: is it possible to define the reason why DRM was created and, if so, can its development over time be traced? What differences in opinion exist regarding DRM between the two major groups, creators, sellers and distributors (referred to as distributors) versus individual users (referred to as consumers)? In what direction do the research results suggest DRM will develop in the future? The development has gone from solving puzzles in a handbook each time the user wants to start the game, to serial numbers that are required during installation. Even more extreme measures have been taken, such as the installation of external software to verify the license key and ensure that no illegal actions were taken. Distributors have shown through the years that they consider the use of DRM a necessity to protect their games from piracy. Over the years, DRM systems have developed into more advanced software protection systems, and with this more problems have begun to emerge that affect legitimate consumers, such as errors preventing users from playing the game. At the same time, distributors show little interest in removing or reducing the use of DRM. Users believe that removing DRM is the perfect solution, but disregard the fact that software without any kind of copy protection risks generating no income at all for the developers. Comparing past and present, we can see distinct patterns of continuing development towards DRM methods that do not create the same amount of issues for consumers. The problem addressed by DRM, however, has shifted from stopping illegal copies to the question of whether consumers have the right to modify or change the games they have purchased.
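As a concrete illustration of the serial-number stage in that evolution, the following Python sketch validates an installation key offline using an invented block-plus-check-character format; real publisher schemes differ, and everything here is hypothetical.

```python
# Toy offline serial-key check of the kind once performed during installation.
def check_char(block: str) -> str:
    """Compute one check character from a five-letter block."""
    total = sum((i + 1) * (ord(c) - ord('A')) for i, c in enumerate(block))
    return chr(ord('A') + total % 26)

def make_key(payload: str) -> str:
    """Turn 15 uppercase letters into a key of three checksummed blocks."""
    assert payload.isalpha() and payload.isupper() and len(payload) == 15
    blocks = [payload[i:i + 5] for i in range(0, 15, 5)]
    return "-".join(block + check_char(block) for block in blocks)

def is_valid(key: str) -> bool:
    blocks = key.split("-")
    return all(len(b) == 6 and check_char(b[:5]) == b[5] for b in blocks)

key = make_key("ABCDEFGHIJKLMNO")
print(key, is_valid(key))                 # a valid key
print(is_valid("AAAAAA-BBBBBB-CCCCCC"))   # False
```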
|
20 |
Inhibitory Control as a Mediator of Individual Differences in Rates of False Memories in Children and Adults. Alberts, Joyce Wendy. January 2010.
The primary aim of this dissertation is to address an important issue of individual susceptibility to false memories. Specifically, what is the role of inhibitory control (IC) in children's and adults' propensity to produce false memories? Inhibitory control within the context of the current study is defined on the basis of performance on selective attention tasks, and is discussed within this dissertation as it is reflected in two such tasks, Stroop and Negative Priming. While the false memory effect, as reflected in the Deese/Roediger and McDermott paradigm (Roediger & McDermott, 1995), is one of the most widely studied memory phenomena, the current study is important as it provides some insight into the relation between attention and memory. An interesting finding in the DRM false memory effect is that participants often report having a clear false memory of having seen or heard the non-presented critical lure item (CL item). Such memory illusions have been informative about how memory works. The current study adds to this body of research by providing converging evidence of how individual differences in sensitivity to the false memory effect may occur, and how this sensitivity may reflect the same IC mechanisms involved in selective attention tasks.
The basic notion examined within this dissertation is that when recognition memory is tested in the DRM paradigm, individuals have to select information that was studied and simultaneously inhibit highly activated yet non-presented information in memory, in order to correctly reject the CL item. If individual differences in sensitivity to the false memory effect are indeed related to a basic IC mechanism, then a relationship should be found between measures of IC in selective attention tasks and rates of false memories in the DRM test.
The current study incorporates three experiments. Experiments 1 and 2 are broken down into parts 'a' and 'b', with the parts varying with respect to the IC measure. In part a, participants were assigned to an inhibitory control group (IC group) on the basis of Stroop interference. In part b, participants were assigned to IC groups on the basis of a combined measure of inhibitory control, that is, Stroop and Negative Priming. The third experiment assigned participants on the basis of the combined measure of IC, and then considered the relation between the duration of IC over a number of DRM word lists presented simultaneously prior to the recognition test. Experiment 3 also allowed the effect of IC on the propensity to produce false memories to be compared across all three experiments.
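A hedged Python sketch of the analysis logic this paragraph describes, using invented data rather than the dissertation's: Stroop interference scores are used to split participants into More and Less IC groups, whose mean false-alarm rates to critical lures are then compared.

```python
# Invented data: (participant, incongruent RT ms, congruent RT ms,
#                 proportion of false alarms to critical lures)
import statistics

data = [
    ("p01", 780, 650, 0.25), ("p02", 900, 640, 0.58),
    ("p03", 820, 700, 0.33), ("p04", 950, 660, 0.67),
    ("p05", 760, 690, 0.17), ("p06", 880, 620, 0.50),
]

# Stroop interference = incongruent RT minus congruent RT; larger interference
# is taken here to indicate less effective inhibitory control.
interference = {pid: inc - con for pid, inc, con, _ in data}
cut = statistics.median(interference.values())

less_ic = [fa for pid, _, _, fa in data if interference[pid] > cut]
more_ic = [fa for pid, _, _, fa in data if interference[pid] <= cut]

print("Less IC mean CL false alarms:", round(statistics.mean(less_ic), 2))  # 0.58
print("More IC mean CL false alarms:", round(statistics.mean(more_ic), 2))  # 0.25
```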
The results of this study can be summarized as follows. In each experiment there was clear evidence of a relation between IC estimates and the proportion of false memories. As predicted, individuals assigned to the Less IC group produced a higher proportion of false memories than those assigned to the More IC group. Inhibitory control differences did not modulate differences in correct or incorrect recognition in general (hits and false alarms to unrelated distractors). This second finding is important because it suggests a specific effect of IC on false memories, rather than a general breakdown in memory processes. The IC effect on false memories occurred in children (8-year-olds and 10-year-olds) as well as adults. Furthermore, the IC effect appeared to be additive with age; i.e., all groups produced a similar pattern across all three experiments. Last, the combined estimate of IC was found to be a more sensitive measure of false memories than a single index of IC; however, this was found for adults but not for children.
A number of additional manipulations and measures of interest were also included. Experiment 2 found clear evidence of an effect of IC on remember responses: not only were Less IC individuals more likely to produce false alarms to critical lure items, they were also more likely to distinctly respond that they "remembered" the CL item as opposed to only "knowing" the CL had been presented. Examination of reaction times (RTs) to false alarms as a function of IC group found that the Less IC group were faster to make false alarm responses to CL items, whereas the More IC group were slower to make false alarm responses to CL items. As predicted, the relation between IC and the false memory effect was modulated by the random versus blocked presentation manipulation in Experiment 3. Specifically, decreased rates of false memories were found in the random presentation format compared to the blocked format. Interestingly, however, a small effect of IC group on false memories was found even in the random condition.
From this study it can be concluded that individual susceptibility to the false memory effect is in part modulated by inhibitory control. Individuals who demonstrate less effective IC show a greater propensity to false memories than those who demonstrate more effective IC. The IC effect on false memories was found to be robust, with converging evidence found across all three experiments. In relation to the development of inhibitory control, consistent with the research of Pritchard and Neumann (2004, 2009) and Lechuga and colleagues (2006), the results of this study suggest IC is fully developed in young children. However, their ability to accurately encode, retain and retrieve information would appear to develop at a different rate than IC. Specifically, it may be that while younger children are able to utilize IC in memory processes, they have yet to fully develop a richly interconnected semantic network. On the other hand, older children and adults would appear to have a more fully developed semantic network.
This series of experiments presents a novel demonstration of the relation between inhibitory control and false memories. As such, this study has the potential to provide new insight into a cognitive mechanism that may be responsible both for developmental trends and for individual differences in the regulation of false memories. Moreover, if the mechanism responsible for mediating false memories is causally linked to performance on selective attention tasks in the systematic way that is proposed, it may be possible in the future to utilize IC measures to assist in identifying individuals who have an exaggerated propensity to form false memories, as well as those more prone to resist them.
|