481 |
La preuve par métadonnées / Dicecca, Christopher 11 1900 (has links)
L’entrée en vigueur de la Loi concernant le cadre juridique des technologies de l’information (ci-après la Loi) est la concrétisation de la prise en compte par le droit de la preuve technologique. La notion de document technologique est centrale à la fois dans la Loi et dans le Code civil du Québec. Elle s’est parfaitement intégrée aux divers moyens de preuve du Code civil.
Nous allons nous intéresser à cette notion qu’est le document technologique, mais davantage à ses éléments structurants, les métadonnées. Nous allons nous pencher sur la notion, ses origines et ses domaines de prédilection, qui font d’elles un objet a priori essentiellement technologique, avant de les envisager dans un contexte de preuve.
Nous allons voir quel potentiel probatoire les métadonnées représentent, à l’appui d’un document technologique. Enfin, nous nous interrogerons sur leur rôle probatoire autour des notions de copie et de transfert, ainsi que des obligations posées par la Loi, soit la certification et la documentation, afin que ces deux modes de reproduction des documents puissent légalement tenir lieu du document original. / The entry into force of the Act to establish a legal framework for information technology (hereafter « the Law ») symbolises the embodiment of technological evidence into law. The notion of technological document is central to this Law. It is perfectly integrated into the different means of evidence in the Civil Code. We will of course look at the notion of technological document, but even more so at its structuring elements, metadata. We will study the notion, the origins and core areas of metadata. Metadata, an essentially technological element, will be studied within the context of evidence law. We will see what probative potential metadata can offer in support of a technological document. Finally, we will examine the role of metadata within the copy and transfer concepts and the obligations imposed by the Law, namely certification and documentation, so that these two modes of reproduction may legally stand in for the original document.
|
482 |
[en] A SOFTWARE ARCHITECTURE FOR AUTOMATED CATALOGUING OF GEOGRAPHIC DATA / [pt] UMA ARQUITETURA DE SOFTWARE PARA CATALOGAÇÃO AUTOMÁTICA DE DADOS GEOGRÁFICOS / LUIZ ANDRE PORTES PAES LEME 12 September 2006 (has links)
[pt] Dados geográficos estão disponíveis em quantidade e variedade crescentes à medida que evoluem as tecnologias de informática. Para torná-los úteis, é necessário que mecanismos de busca de dados possam identificar dados apropriados a determinado propósito. Tais mecanismos, comumente, utilizam catálogos de metadados que descrevem cada dado geográfico. Entretanto, a geração de metadados é um processo que pode consumir muito tempo e estar sujeito a muitos erros, caso seja feita manualmente. Esta dissertação apresenta uma arquitetura de software e tecnologias correlatas para aplicações de catalogação automática de dados geográficos. / [en] The amount and variety of geographic data increase as technology evolves. To make them useful, it is necessary to implement search engines capable of identifying appropriate data. Such engines are usually based on metadata catalogs which describe the geographic data. However, the metadata generation process is time-consuming and error-prone when carried out manually. This dissertation presents a software architecture, and related technologies, for the construction of automated cataloguing applications for geographic data.
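The catalog-based search mechanism this abstract describes can be illustrated with a small sketch. This is an editor's toy example, not the dissertation's architecture; the record fields (`title`, `keywords`, `bbox`) and the sample datasets are assumptions.

```python
# Illustrative sketch: a minimal metadata catalog for geographic datasets
# and a search over it by keyword and bounding box.

def make_record(title, keywords, bbox):
    """bbox = (min_lon, min_lat, max_lon, max_lat)."""
    return {"title": title, "keywords": set(keywords), "bbox": bbox}

def bbox_intersects(a, b):
    # Two boxes intersect unless one lies entirely beyond the other.
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def search(catalog, keyword=None, bbox=None):
    """Return titles of records matching a keyword and/or intersecting a bbox."""
    hits = []
    for rec in catalog:
        if keyword is not None and keyword not in rec["keywords"]:
            continue
        if bbox is not None and not bbox_intersects(rec["bbox"], bbox):
            continue
        hits.append(rec["title"])
    return hits

catalog = [
    make_record("Rio hydrography", ["rivers", "hydrography"],
                (-44.0, -23.1, -43.0, -22.7)),
    make_record("Amazon land cover", ["forest", "land-cover"],
                (-70.0, -10.0, -50.0, 0.0)),
]
print(search(catalog, keyword="rivers", bbox=(-43.5, -23.0, -43.2, -22.8)))
# prints ['Rio hydrography']
```

The point of the dissertation is precisely that records like these are expensive to write by hand, motivating automated metadata extraction.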
|
483 |
CollaboraTVware: uma infra-estrutura ciente de contexto para suporte a participação colaborativa no cenário da TV Digital Interativa. / CollaboraTVware: a context-aware infrastructure with support for collaborative participation in an Interactive Digital TV environment. / Alves, Luiz Gustavo Pacola 13 October 2008 (has links)
O advento da TV Digital Interativa no mundo modifica, em definitivo, a experiência do usuário em assistir a TV, tornando-a mais rica principalmente pelo uso do recurso da interatividade. Os usuários passam a ser pró-ativos e começam a interagir das mais diversas formas: construção de comunidades virtuais, discussão sobre um determinado conteúdo, envio de mensagens e recomendações, dentre outras. Neste cenário a participação dos usuários de forma colaborativa assume um papel importante e essencial. Aliado a isso, a recepção na TV Digital Interativa é feita através de dispositivos computacionais que, devido à convergência digital, estão presentes cada vez mais em meios ubíquos. Um outro fator preponderante a considerar, resultante desta mídia, corresponde ao crescimento da quantidade e diversidade de programas e serviços interativos disponíveis, dificultando, assim, a seleção de conteúdo de maior relevância. Diante dos fatos expostos, esta pesquisa tem como principal objetivo propor e implementar uma infra-estrutura de software no cenário da TV Digital Interativa intitulada CollaboraTVware para orientar, de forma transparente, os usuários na escolha de programas e serviços interativos através da participação colaborativa de outros usuários com perfis e contextos similares. No escopo deste trabalho, a participação colaborativa corresponde às avaliações atribuídas por usuários no sentido de expressar opiniões sobre os conteúdos veiculados. As modelagens de usuário, do dispositivo utilizado e do contexto da interação do usuário, essenciais para o desenvolvimento do CollaboraTVware, são representadas por padrões de metadados flexíveis usados no domínio da TV Digital Interativa (MPEG-7, MPEG-21 e TV-Anytime), e suas devidas extensões. A arquitetura do CollaboraTVware é composta por dois subsistemas: dispositivo do usuário e provedor de serviços. A tarefa de classificação, da teoria de mineração de dados, é a abordagem adotada na concepção da infra-estrutura. 
O conceito de perfil de uso participativo é apresentado e discutido. Para demonstrar e validar as funcionalidades do CollaboraTVware em um cenário de uso, foi desenvolvida uma aplicação (EPG colaborativo) como estudo de caso. / The advent of Interactive Digital TV around the world transforms, ultimately, the user experience of watching TV, making it richer mainly by enabling user interactivity. Users become pro-active and begin to interact in very different ways: building virtual communities, discussing content, sending messages and recommendations, etc. In this scenario, collaborative user participation assumes an important and essential role. Additionally, reception in Interactive Digital TV is done by devices that, due to digital convergence, are increasingly present in ubiquitous environments. Another preponderant issue to consider, resulting from this media, is the growth in the number and diversity of available programs and interactive services, which increases the difficulty of selecting relevant content. Thus, the main objective of this work is to propose and implement a software infrastructure for the Interactive Digital TV environment, entitled CollaboraTVware, to guide users, in a transparent way, in the choice of programs and interactive services through the collaborative participation of other users with similar profiles and contexts. In the scope of this work, collaborative participation corresponds to the ratings given by users in order to express opinions about the content transmitted. The models of the user, the device used and the context of user interaction, essential for the development of CollaboraTVware, are represented by flexible metadata standards used in the field of Interactive Digital TV (MPEG-7, MPEG-21 and TV-Anytime) and the necessary extensions. The CollaboraTVware architecture is composed of two subsystems: the user device and the service provider.
The classification task, from the theory of data mining, is the approach adopted in the infrastructure design. The concept of the participative usage profile is presented and discussed. To demonstrate the functionalities in a usage scenario, an application (a collaborative EPG) was developed as a case study using CollaboraTVware.
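The idea of treating recommendation as a classification task can be sketched in a few lines. This is a hedged illustration, not CollaboraTVware's code: the features (genre, time of day, watching alone) and the 1-nearest-neighbour rule are the editor's assumptions.

```python
# Sketch: predict "like"/"dislike" for a program from simple
# user-context features, using ratings from similar past situations.

def distance(a, b):
    # Number of differing categorical features (Hamming distance).
    return sum(1 for x, y in zip(a, b) if x != y)

def knn_predict(training, features):
    """training: list of (features, label); return label of nearest example."""
    return min(training, key=lambda ex: distance(ex[0], features))[1]

# (genre, time_of_day, watching_alone) -> rating from collaborative history
history = [
    (("news",   "evening", "yes"), "like"),
    (("soap",   "morning", "no"),  "dislike"),
    (("sports", "weekend", "no"),  "like"),
]
print(knn_predict(history, ("news", "evening", "no")))  # prints like
```

A real system would learn from many users' ratings and richer MPEG-7/TV-Anytime metadata; the classifier here only shows the shape of the task.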
|
484 |
Uma arquitetura para mecanismos de buscas na web usando integração de esquemas e padrões de metadados heterogêneos de recursos educacionais abertos em repositórios dispersos / An architecture for web search engines using integration of heterogeneous metadata schemas and standards of open educational resources in scattered repositories / Gazzola, Murilo Gleyson 18 November 2015 (has links)
Recursos Educacionais Abertos (REA) podem ser definidos como materiais de ensino, aprendizagem e pesquisa, em qualquer meio de armazenamento, que estão amplamente disponíveis por meio de uma licença aberta que permite reuso, readequação e redistribuição sem restrições ou com restrições limitadas. Atualmente, diversas instituições de ensino e pesquisa têm investido em REA para ampliar o acesso ao conhecimento. Entretanto, os usuários ainda têm dificuldades de encontrar os REA com os mecanismos de busca atuais. Essa dificuldade deve-se principalmente ao fato de os mecanismos de busca na Web serem genéricos, pois buscam informação em qualquer lugar, desde páginas de vendas até materiais escritos por pessoas anônimas. De fato, esses mecanismos não levam em consideração as características intrínsecas de REA, como os diferentes padrões de metadados, repositórios e plataformas existentes, os tipos de licença, a granularidade e a qualidade dos recursos. Esta dissertação apresenta o desenvolvimento de um mecanismo de busca na Web especificamente para recuperação de REA, denominado SeeOER. As principais contribuições desta pesquisa de mestrado consistem no desenvolvimento de um mecanismo de busca na Web por REA com diferenciais, entre os quais se destacam a resolução de conflitos em nível de esquema oriundos da heterogeneidade dos REA, a busca em repositórios de REA, a consulta sobre a procedência de dados e o desenvolvimento de um crawler efetivo para obtenção de metadados específicos. Além disso, contribui para a inclusão da busca de REA no cenário brasileiro, para o mapeamento de padrões de metadados para mecanismos de busca na Web e para a publicação de uma arquitetura de um mecanismo de busca na Web. Ademais, o SeeOER disponibiliza um serviço que traz um índice invertido de busca que auxilia a encontrar REA nos repositórios dispersos na Web. Também foi disponibilizada uma API para buscas que possibilita consultas por palavras-chave e o uso de operadores booleanos.
A validação do mecanismo de busca na Web foi feita de forma qualitativa e quantitativa, tanto como um todo quanto, de forma específica, por componentes. Para a validação de qualidade foram considerados 10 participantes de grupos distintos de escolaridade e área de estudo. Os resultados quantitativos demonstraram que o SeeOER é superior, com 23.618 REA indexados em comparação a 15.955 do Jorum. Em relação à qualidade, o SeeOER demonstrou ser superior ao Jorum considerando a função penalizada e o score utilizados nesta pesquisa. / Open Educational Resources (OER) have been increasingly applied to support students and professionals in their learning process. They consist of learning resources, usually stored in electronic devices, associated with an open license that allows reuse, re-adaptation and redistribution with either no or limited restrictions. However, current Web search engines do not provide efficient mechanisms to find OER, in particular because they do not consider the intrinsic characteristics of OER, such as different metadata standards, heterogeneous repositories and platforms, license types, granularity and quality of resources. This project proposes a Web search engine, named SeeOER, designed to retrieve OER. The main features of SeeOER are: schema-level conflict resolution derived from the heterogeneity of OER, search over Brazilian OER repositories, queries considering data provenance, and the development of an effective crawler to obtain specific metadata. In addition, our project contributes to the inclusion of OER search research issues in the Brazilian scenario and to the mapping of metadata standards to Web search engines. SeeOER also provides a service which internally uses an inverted search index to find OER in repositories scattered across the Web. We also provide a query API which makes it possible to write queries based on keywords and boolean operators.
The validation of the Web search engine was both qualitative and quantitative. The quantitative validation was carried out at the level of the individual components of the search engine. In conclusion, the qualitative and quantitative experiments showed that SeeOER is superior, with 23,618 OER indexed compared to Jorum's 15,955. In terms of quality, SeeOER was shown to be superior to Jorum by 27 points on the metric used in this project.
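The inverted index and the keyword/boolean queries mentioned above can be sketched minimally. This is an editor's illustration of the general technique, not SeeOER's API; the sample records are invented.

```python
# Sketch: map each term to the set of record ids containing it,
# then answer AND/OR queries by set intersection/union.

from collections import defaultdict

def build_index(records):
    index = defaultdict(set)
    for rec_id, text in records.items():
        for term in text.lower().split():
            index[term].add(rec_id)
    return index

def query_and(index, *terms):
    sets = [index.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()

def query_or(index, *terms):
    return set().union(*(index.get(t, set()) for t in terms))

records = {
    1: "open educational resources for physics",
    2: "open textbook on linear algebra",
    3: "physics lecture videos",
}
idx = build_index(records)
print(sorted(query_and(idx, "open", "physics")))   # prints [1]
print(sorted(query_or(idx, "algebra", "videos")))  # prints [2, 3]
```

A production engine would index crawled OER metadata rather than raw text, and add ranking; the set operations are the core of boolean retrieval.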
|
485 |
Describing data patterns / Voß, Jakob 07 August 2013 (has links)
Diese Arbeit behandelt die Frage, wie Daten grundsätzlich strukturiert und beschrieben sind. Im Gegensatz zu vorhandenen Auseinandersetzungen mit Daten im Sinne von gespeicherten Beobachtungen oder Sachverhalten werden Daten hierbei semiotisch als Zeichen aufgefasst. Diese Zeichen werden in Form von digitalen Dokumenten kommuniziert und sind mittels zahlreicher Standards, Formate, Sprachen, Kodierungen, Schemata, Techniken etc. strukturiert und beschrieben. Diese Vielfalt von Mitteln wird erstmals in ihrer Gesamtheit mit Hilfe der phänomenologischen Forschungsmethode analysiert. Ziel ist es dabei, durch eine genaue Erfahrung und Beschreibung von Mitteln zur Strukturierung und Beschreibung von Daten zum allgemeinen Wesen der Datenstrukturierung und -beschreibung vorzudringen. Die Ergebnisse dieser Arbeit bestehen aus drei Teilen. Erstens ergeben sich sechs Prototypen, die die beschriebenen Mittel nach ihrem Hauptanwendungszweck kategorisieren. Zweitens gibt es fünf Paradigmen, die das Verständnis und die Anwendung von Mitteln zur Strukturierung und Beschreibung von Daten grundlegend beeinflussen. Drittens legt diese Arbeit eine Mustersprache der Datenstrukturierung vor. In zwanzig Mustern werden typische Probleme und Lösungen dokumentiert, die bei der Strukturierung und Beschreibung von Daten unabhängig von konkreten Techniken immer wieder auftreten. Die Ergebnisse dieser Arbeit können dazu beitragen, das Verständnis von Daten, das heißt digitaler Dokumente und ihrer Metadaten in allen ihren Formen, zu verbessern. Spezielle Anwendungsgebiete liegen unter anderem in den Bereichen Datenarchäologie und Daten-Literacy. / Many methods, technologies, standards, and languages exist to structure and describe data. The aim of this thesis is to find common features in these methods to determine how data is actually structured and described.
Existing studies are limited to notions of data as recorded observations and facts, or they require given structures to build on, such as the concept of a record or the concept of a schema. These presumed concepts have been deconstructed in this thesis from a semiotic point of view. This was done by analysing data as signs, communicated in the form of digital documents. The study was conducted using a phenomenological research method. Conceptual properties of data structuring and description were first collected and experienced critically. Examples of such properties include encodings, identifiers, formats, schemas, and models. The analysis resulted in six prototypes that categorize data methods by their primary purpose. The study further revealed five basic paradigms that deeply shape how data is structured and described in practice. The third result consists of a pattern language of data structuring. The patterns show problems and solutions which occur over and over again in data, independent of particular technologies. Twenty general patterns were identified and described, each with its benefits, consequences, pitfalls, and relations to other patterns. The results can help to better understand data and its actual forms, both for the consumption and the creation of data. Particular domains of application include data archaeology and data literacy.
|
486 |
Metadatenbasierte Kontextualisierung architektonischer 3D-Modelle / Blümel, Ina 18 December 2013 (has links)
Digitale 3D-Modelle der Architektur haben innerhalb der letzten fünf Jahrzehnte sowohl die analogen, auf Papier basierenden Zeichnungen als auch die physischen Modelle aus ihrer planungs-, ausführungs- und dokumentationsunterstützenden Rolle verdrängt. Als Herausforderungen bei der Integration von 3D-Modellen in digitale Bibliotheken und Archive sind zunächst die meist nur rudimentäre Annotation mit Metadaten seitens der Autoren und die nur implizit in den Modellen vorhandenen Informationen zu nennen. Aus diesen Defiziten resultiert ein aktuell starkes Interesse an inhaltsbasierter Erschließung durch vernetzte Nutzergruppen oder durch automatisierte Verfahren, die z.B. aufgrund von Form- oder Strukturmerkmalen eine automatische Kategorisierung von 3D-Modellen anhand gegebener Schemata ermöglichen. Die teilweise automatische Erkennung von objektinhärenter Semantik vergrößert die Menge an diskreten und semantisch unterscheidbaren Einheiten. 3D-Modelle als Content im World Wide Web können sowohl untereinander als auch mit anderen textuellen wie nichttextuellen Objekten verknüpft werden, also Teil von aggregierten Dokumenten sein. Die Aggregationen bzw. der Modellkontext sowie die inhärenten Entitäten erfordern Instrumente der Organisation, um dem Benutzer bei der Suche nach Informationen einen Mehrwert zu bieten, insbesondere dann, wenn textbasiert nach Informationen zum Modell und zu dessen Kontext gesucht wird. In der vorliegenden Arbeit wird ein Metadatenmodell zur gezielten Strukturierung von Information entwickelt, welche aus 3D-Architekturmodellen gewonnen wird. Mittels dieser Strukturierung kann das Modell mit weiterer Information vernetzt werden. 
Die Anwendung etablierter Ontologien sowie der Einsatz von URIs machen die Informationen nicht nur explizit, sondern beinhalten auch eine semantische Information über die Relation selbst, sodass eine Interoperabilität zu anderen verfügbaren Daten im Sinne der Grundprinzipien des Linked-Data-Ansatzes gewährleistet wird. / Digital 3D models from the domain of architecture have replaced analogue paper-based drawings as well as haptic scale models bit by bit during the last five decades. The main challenges for integrating 3D models into digital libraries and archives are posed by the mostly only sparse annotation with metadata provided by the author and the fact that information is only implicitly available. This has recently led to an increased interest in content-based indexing using automatic approaches as well as social tagging. Computer-based approaches usually rely on methods from artificial intelligence, including machine learning, for automated categorization based on geometric and structural properties according to a given classification scheme. The partially automated recognition of model-inherent semantics increases the number of discrete and semantically distinguishable entities. 3D models as parts of the World Wide Web can be interlinked with each other. Aggregations as well as the model context, along with inherent entities, require efficient tools for organization in order to provide real additional benefits for the user during their quest for information. Especially for text-based search on information about a 3D model and its context, a metadata model is an indispensable tool regarding the above-described challenges. In this work we develop a metadata model for the specific structuring of information obtained from 3D architectural models. Using this structure, the model can be linked to further information.
The application of established ontologies and the use of URIs make the information not only explicit, but also provide semantic information about the relation itself. In this way, interoperability with other available data, according to the principles of the Linked Data approach, is guaranteed.
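The Linked Data idea sketched above, URI-identified resources connected by typed relations from established ontologies, can be illustrated with a few triples. This is an editor's example, not the thesis's metadata model: all `example.org` URIs are hypothetical, while the Dublin Core and FOAF predicates are real vocabulary terms.

```python
# Sketch: describe a 3D building model with RDF statements in
# N-Triples syntax, linking it to external, URI-identified data.

def triple(subject, predicate, obj):
    """Serialize one RDF statement in N-Triples syntax."""
    return f"<{subject}> <{predicate}> <{obj}> ."

model_uri = "http://example.org/models/town-hall-3d"
statements = [
    triple(model_uri, "http://purl.org/dc/terms/creator",
           "http://example.org/persons/jane-architect"),
    triple(model_uri, "http://purl.org/dc/terms/subject",
           "http://dbpedia.org/resource/Town_hall"),
    # The relation itself carries semantics: the model depicts a building.
    triple(model_uri, "http://xmlns.com/foaf/0.1/depicts",
           "http://example.org/buildings/town-hall"),
]
print("\n".join(statements))
```

Because every node and relation is a dereferenceable URI, the model's description can be merged with any other dataset that uses the same vocabularies, which is the interoperability the abstract refers to.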
|
487 |
L’évolution des systèmes et architectures d’information sous l’influence des données massives : les lacs de données / The information architecture evolution under the big data influence: the data lakes / Madera, Cedrine 22 November 2018 (has links)
La valorisation du patrimoine des données des organisations est au cœur de leur transformation digitale. Sous l’influence des données massives, le système d’information doit s’adapter et évoluer. Cette évolution passe par une transformation des systèmes décisionnels mais aussi par l’apparition d’un nouveau composant du système d’information : le lac de données. Nous étudions l’impact des données massives, notamment avec l’apparition de nouvelles technologies comme Apache Hadoop, ainsi que les limites actuelles des systèmes décisionnels. Ces limites imposent une évolution du système d’information, qui doit s’adapter et donne naissance à un nouveau composant : le lac de données. Dans un deuxième temps, nous étudions en détail ce nouveau composant, formalisons notre définition et donnons notre point de vue sur son positionnement dans le système d’information ainsi que vis-à-vis des systèmes décisionnels. Par ailleurs, nous mettons en évidence un facteur influençant l’architecture des lacs de données : la gravité des données, en dressant une analogie avec la loi de la gravité et en nous concentrant sur les facteurs qui peuvent influencer la relation donnée-traitement. Nous mettons en évidence, au travers d’un cas d’usage, que la prise en compte de la gravité des données peut influencer la conception d’un lac de données. Nous terminons ces travaux par une adaptation de l’approche ligne de produit logiciel pour amorcer une méthode de formalisation et de modélisation des lacs de données. Cette méthode nous permet :
- d’établir une liste de composants minimum à mettre en place pour faire fonctionner un lac de données sans que ce dernier soit transformé en marécage,
- d’évaluer la maturité d’un lac de données existant,
- de diagnostiquer rapidement les composants manquants d’un lac de données existant qui serait devenu un marécage,
- de conceptualiser la création des lacs de données en étant « logiciel agnostique ». / Data is at the heart of the digital transformation. The consequence is an acceleration of the evolution of the information system, which must adapt. The big data phenomenon plays the role of catalyst in this evolution. Under its influence appears a new component of the information system: the data lake. Far from replacing the decision support systems that make up the information system, data lakes complete the information system’s architecture. First, we focus on the factors that influence the evolution of information systems, such as new software and middleware and new infrastructure technologies, but also the usage of the decision support system itself. Under the big data influence, we study the impact that this entails, especially with the appearance of new technologies such as Apache Hadoop, as well as the current limits of the decision support system. The limits encountered by the current decision support system force a change to the information system, which must adapt, and this gives birth to a new component: the data lake. In a second phase, we study this new component in detail, formalize our definition, and give our point of view on its positioning in the information system as well as with regard to the decision support system. In addition, we highlight a factor influencing the architecture of data lakes: data gravity, drawing an analogy with the law of gravity and focusing on the factors that may influence the data-processing relationship. We show, through a use case, that taking data gravity into account can influence the design of a data lake. We complete this work by adapting the software product line approach to bootstrap a method for formalizing and modeling data lakes. This method allows us:
- to establish a minimum list of components to be put in place to operate a data lake without transforming it into a data swamp,
- to evaluate the maturity of an existing data lake,
- to quickly diagnose the missing components of an existing data lake that has become a data swamp,
- to conceptualize the creation of data lakes while remaining "software agnostic".
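The data gravity analogy can be written down directly. The formula below is the classic Newton-style analogy, not necessarily the author's exact model: "attraction" between a dataset and a processing component grows with their masses and falls with the square of the "distance" (for instance, network latency) between them.

```python
# Sketch of the data-gravity analogy: large data attracts processing,
# and attraction drops sharply with distance. Units are illustrative.

def data_gravity(data_mass, processing_mass, distance):
    """Gravity-like attraction score between data and processing."""
    return (data_mass * processing_mass) / (distance ** 2)

# Moving processing close to a large dataset maximizes attraction,
# which is why lakes tend to co-locate compute with stored data:
far = data_gravity(data_mass=1000.0, processing_mass=10.0, distance=10.0)
near = data_gravity(data_mass=1000.0, processing_mass=10.0, distance=1.0)
print(near > far)  # prints True
```

The practical reading for lake design: past a certain data mass, it is cheaper to bring processing to the data than the data to the processing.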
|
488 |
Convergência digital de objetos de aprendizagem Scorm / Rodolpho, Everaldo Rodrigo. January 2009 (has links)
Orientador: Hilda Carvalho de Oliveira / Banca: Eugenio Maria de França Ramos / Banca: Klaus Schlünzen Junior / Resumo: A construção de Objetos de Aprendizagem (OAs) é um importante processo de Educação a Distância. Padrões têm sido definidos com estruturas de metadados para favorecer a reutilização e portabilidade dos OAs, como SCORM, LOM, ARIADNE, entre outros. Mesmo assim, a portabilidade entre diferentes sistemas de e-Learning requer conhecimentos específicos. A dificuldade aumenta quando se direciona a diferentes meios digitais e de comunicação, como ambientes da Web e da TV Digital Aberta (TVDA) - um meio alternativo de acesso à Educação que vem sendo integrado à vida dos brasileiros. Nesse contexto, o principal objetivo deste trabalho foi a investigação de um novo modelo, OAX, para implementação de OAs com portabilidade para ambientes Web e para a TVDA. O modelo, baseado em metadados e codificação Base64, foi definido com base na estrutura SCORM. Para a criação e gerenciamento dos OAs, segundo o modelo OAX, foi proposta a arquitetura de um sistema de autoria, SOAX - uma aplicação Web, composta por quatro componentes, visando: encapsulamento do átomo de conteúdo OAX, armazenamento do conteúdo, aplicativos de gerenciamento/visualização de conteúdo e APIs (Application Programming Interface) de importação e exportação para padrões de OAs. O sistema SOAX foi projetado com a finalidade de atender educadores com conhecimentos básicos de Informática, de forma que pudessem construir os OAs preocupados apenas com os aspectos didático-pedagógicos. O sistema converte automaticamente os OAs para os formatos de padrões de OAs e para ambientes da TVDA. Está disponível uma versão beta do SOAX. / Abstract: The construction of Learning Objects (LOs) is an important process for distance education. Standards have been defined with metadata structures to enhance the reusability and portability of LOs, such as SCORM, LOM, ARIADNE, among others.
The portability between different e-Learning systems requires expert knowledge. The difficulty increases when different digital media and communication environments are used, for example the Web and Open Digital TV (ODTV), an alternative means of access to education that is being integrated into the daily life of Brazilians. In this context, the main goal of this work was to investigate a new model for the implementation of LOs (OAX) with portability to Web and ODTV environments. The model was defined based on the SCORM standard and relies on metadata and Base64 encoding. The architecture of an authoring system (SOAX) was proposed for the creation and management of LOs according to the OAX model. SOAX is a Web application composed of four components, for: encapsulation of the OAX content atom, content storage, content viewing/management applications, and APIs (Application Programming Interfaces) for importing and exporting LOs to standard formats. The SOAX system was designed for educators with basic computer knowledge, so that they can concentrate their efforts on the didactic-pedagogic aspects of the LOs. The system automatically converts the LOs to the formats of the LO standards and to ODTV environments. A beta version of SOAX is available. / Mestre
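The encapsulation idea behind the model, binary learning content carried as Base64 text inside a metadata envelope so the whole object travels as plain text, can be sketched as follows. This is a hedged illustration of the technique, not the actual OAX format; the envelope's field names are hypothetical.

```python
# Sketch: wrap a learning object's binary content in a text-only
# metadata envelope using Base64, and unwrap it losslessly.

import base64
import json

def encapsulate(title, author, content_bytes):
    """Return a text envelope holding metadata plus Base64 content."""
    return json.dumps({
        "title": title,
        "author": author,
        "content": base64.b64encode(content_bytes).decode("ascii"),
    })

def extract(envelope):
    """Recover the original binary content from the envelope."""
    record = json.loads(envelope)
    return base64.b64decode(record["content"])

payload = b"<html><body>Lesson 1: fractions</body></html>"
envelope = encapsulate("Fractions", "Prof. Silva", payload)
print(extract(envelope) == payload)  # prints True
```

Because the envelope is pure text, the same object can be delivered through channels that only carry text, which is the portability argument the abstract makes for Web and ODTV targets.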
|
489 |
PersonalTVware: uma infraestrutura de suporte a sistemas de recomendação sensíveis ao contexto para TV Digital Personalizada. / PersonalTVware: an infrastructure to support context-aware recommender systems for Personalized Digital TV. / Fábio Santos da Silva 18 March 2011 (has links)
O processo de digitalização da TV em diversos países do mundo tem contribuído para o aumento do volume de programas de TV, o que gera uma sobrecarga de informação. Consequentemente, o usuário está enfrentando dificuldade para encontrar os programas de TV favoritos dentre as várias opções disponíveis. Diante deste cenário, os sistemas de recomendação destacam-se como uma possível solução. Tais sistemas são capazes de filtrar itens relevantes de acordo com as preferências do usuário ou de um grupo de usuários que possuem perfis similares. Entretanto, em diversas recomendações o interesse do usuário pode depender do seu contexto. Assim, torna-se importante estender as abordagens tradicionais de recomendação personalizada por meio da exploração do contexto do usuário, o que poderá melhorar a qualidade das recomendações. Para isso, este trabalho descreve uma infraestrutura de software de suporte ao desenvolvimento e execução de sistemas de recomendação sensíveis ao contexto para TV Digital Interativa - intitulada de PersonalTVware. A solução proposta fornece componentes que implementam técnicas avançadas para recomendação de conteúdo e processamento de contexto. Com isso, os desenvolvedores de sistemas de recomendação concentram esforços na lógica de apresentação de seus sistemas, deixando questões de baixo nível para o PersonalTVware gerenciar. As modelagens de usuário, e do contexto, essenciais para o desenvolvimento do PersonalTVware, são representadas por padrões de metadados flexíveis usados na TV Digital Interativa (MPEG-7 e TV-Anytime), e suas devidas extensões. A arquitetura do PersonalTVware é composta por dois subsistemas: dispositivo do usuário e provedor de serviços. A tarefa de predição de preferências contextuais é baseada em métodos de aprendizagem de máquina, e a filtragem de informação sensível ao contexto tem como base a técnica de filtragem baseada em conteúdo. O conceito de perfil contextual também é apresentado e discutido. 
Para demonstrar e validar as funcionalidades do PersonalTVware em um cenário de uso, foi desenvolvido um sistema de recomendação sensível ao contexto como estudo de caso. / The process of digitalization of TV in several countries around the world has contributed to an increase in the volume of TV programs offered, which leads to an information overload problem. Consequently, users face difficulty in finding their favorite TV programs among the many available options. Within this scenario, recommender systems stand out as a possible solution. These systems are capable of filtering relevant items according to the preferences of a user or of a group of users with similar profiles. However, most recommender systems for Interactive Digital TV rarely take the user's contextual information into consideration when carrying out recommendations, even though in many recommendations the user's interest may depend on the context. Thus, it becomes important to extend the traditional approaches to personalized recommendation of TV programs by exploiting the user's context, which may improve the quality of the recommendations. Therefore, this work presents a software infrastructure for the Interactive Digital TV environment to support context-aware personalized recommendation of TV programs, entitled PersonalTVware. The proposed solution provides components which implement advanced techniques for content recommendation and context management. Thus, developers of recommender systems can concentrate efforts on the presentation logic of their systems, leaving low-level questions for PersonalTVware to manage. The models of the user and the context, essential for the development of PersonalTVware, are represented by flexible metadata standards used in the Interactive Digital TV field (MPEG-7 and TV-Anytime) and the required extensions. The PersonalTVware architecture is composed of two subsystems: the user's device and the service provider.
The task of inferring contextual preferences is based on machine learning methods, and context-aware information filtering is based on content-based filtering technique. The concept of contextual user profile is presented and discussed. To demonstrate the functionalities in a usage scenario a context-aware recommender system was developed as a case study applying the PersonalTVware.
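The combination the abstract describes, a contextual user profile consulted before content-based filtering ranks candidate programs, can be sketched as follows. This is an illustrative toy, not PersonalTVware's actual code: the profile keys, genre weights, and program list are all hypothetical, and the similarity measure (cosine over genre-weight vectors) is just one common choice for content-based filtering.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse vectors given as dicts."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(contextual_profile, programs, context, top_n=2):
    """Context-aware content-based filtering (illustrative):
    1. select the preference vector matching the current context;
    2. rank candidate programs by similarity to that vector."""
    prefs = contextual_profile.get(context)
    if prefs is None:
        return []  # no preferences learned for this context
    scored = [(cosine(prefs, p["features"]), p["title"]) for p in programs]
    scored.sort(reverse=True)
    return [title for _, title in scored[:top_n]]

# Hypothetical contextual user profile: one genre-weight vector per context.
profile = {
    ("evening", "home"): {"news": 0.9, "drama": 0.4},
    ("morning", "home"): {"cartoon": 0.8, "sports": 0.5},
}
programs = [
    {"title": "Nightly News",  "features": {"news": 1.0}},
    {"title": "Morning Toons", "features": {"cartoon": 1.0}},
    {"title": "Soap Opera",    "features": {"drama": 1.0}},
]
print(recommend(profile, programs, ("evening", "home")))
# → ['Nightly News', 'Soap Opera']
```

In the thesis the per-context preference vectors would come from the machine-learning prediction step rather than being hard-coded, and the item features from MPEG-7/TV-Anytime metadata rather than a dict.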
|
490 |
Mapeamento de processos baseado em controles para governança de tecnologia da informação / Process mapping based on controls for information technology governance
Poncinelli Filho, Carlos Alberto, 27 April 2007 (has links)
Research on the development of a process-mapping model based on control patterns for information technology governance. Organizations increasingly try to keep their governance processes under control; however, choosing, adopting, and using the models that best fit their reality is no easy task. This work proposes the development of an information technology governance process that uses the systems-study method called Sistemografia in order to meet the requirements of the Sarbanes-Oxley Act. The process is based on a model of control by objectives, Control Objectives for Information and related Technology (COBIT). All of these factors are intrinsically present in companies that manage their own technologies, especially telecommunications companies involved with tele-informatics and multi-service networks. Specifically, the objectives are: to describe the elements that make up enterprise governance and enterprise architecture; to describe the sections of the Sarbanes-Oxley Act, relating them to the COBIT model; to describe the governance process using the Sistemografia method; and to propose a framework for managing data and metadata for IT governance, together with a dashboard based on process maturity, so as to measure the processes at their present stage and plan the targeted strategic stage.
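The maturity-based dashboard this abstract proposes, measuring each IT process at its current maturity and planning a target stage, can be illustrated with a minimal sketch. The process names and the 0-5 maturity scale follow COBIT's maturity model, but the specific current/target values and the `maturity_gaps` helper are hypothetical, chosen only to show the gap computation a dashboard would display.

```python
# Hypothetical COBIT-style maturity dashboard: each IT process carries a
# current and a target maturity level (0-5, as in COBIT's maturity model).
processes = {
    "PO1 Define a strategic IT plan": {"current": 2, "target": 4},
    "DS5 Ensure systems security":    {"current": 3, "target": 4},
    "ME1 Monitor and evaluate IT":    {"current": 1, "target": 3},
}

def maturity_gaps(procs):
    """Return (process, target - current) pairs, largest gap first,
    so the dashboard highlights where governance effort is most needed."""
    gaps = [(name, p["target"] - p["current"]) for name, p in procs.items()]
    return sorted(gaps, key=lambda item: item[1], reverse=True)

for name, gap in maturity_gaps(processes):
    print(f"{name}: gap {gap}")
```

A real implementation would draw the current levels from the metadata repository the thesis proposes rather than from a hard-coded dict.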
|