231 |
Employing linked data and dialogue for modelling cultural awareness of a userDenaux, R., Dimitrova, V., Lau, L., Brna, P., Thakker, Dhaval, Steiner, C. January 2014 (has links)
Yes / Intercultural competence is an essential 21st century skill. A key issue for developers of cross-cultural training simulators is the need to provide a relevant learning experience adapted to the learner's abilities. This paper presents a dialogic approach for a quick assessment of the depth of a learner's current intercultural awareness, developed as part of the EU ImREAL project. To support the dialogue, Linked Data is used as a rich knowledge base offering a diverse range of resources on cultural aspects. This paper investigates how semantic technologies can be used to: (a) extract a pool of concrete, culturally relevant facts from DBpedia that can be linked to various cultural groups and to the learner; (b) model a learner's knowledge of a selected set of cultural themes; and (c) provide a novel, adaptive and user-friendly user modelling dialogue for cultural awareness. The usability and usefulness of the approach are evaluated through a CrowdFlower study and expert inspection.
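To make step (a) concrete, a fact extraction query against the public DBpedia SPARQL endpoint might be sketched as follows; the property and category choices are assumptions for illustration, not the selection actually used in ImREAL:

```python
# Illustrative sketch (assumed vocabulary choices, not the ImREAL
# implementation): build a SPARQL query that pulls culturally relevant
# facts for a cultural group from DBpedia via Wikipedia category links.

def build_culture_query(group_label: str, limit: int = 20) -> str:
    """Return a SPARQL query for facts linked to a cultural group."""
    return f"""
PREFIX dct:  <http://purl.org/dc/terms/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT DISTINCT ?fact ?label WHERE {{
  ?fact dct:subject ?category .
  ?category rdfs:label ?catLabel .
  FILTER (CONTAINS(LCASE(STR(?catLabel)), "{group_label.lower()}"))
  ?fact rdfs:label ?label .
  FILTER (LANG(?label) = "en")
}}
LIMIT {limit}
""".strip()

# The resulting string would be posted to the public DBpedia SPARQL
# endpoint; here we only build and inspect it.
query = build_culture_query("Japanese etiquette")
print(query.splitlines()[0])
```

The returned facts would then form the pool from which dialogue probes are drawn.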
|
232 |
Using Basic Level Concepts in a Linked Data Graph to Detect User's Domain FamiliarityAl-Tawil, M., Dimitrova, V., Thakker, Dhaval January 2015 (has links)
No / We investigate how to provide personalized nudges to aid a user's exploration of linked data in a way that expands her domain knowledge. This requires a model of the user's familiarity with domain concepts. The paper examines an approach to detect user domain familiarity by exploiting anchoring concepts, which provide a backbone for probing interactions over the linked data graph. Basic level concepts studied in Cognitive Science are adopted as anchors. A user study examines how such concepts can be utilized to deal with the cold start user modelling problem, which informs a probing algorithm.
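The anchoring idea can be sketched as a toy probing routine; the taxonomy, the chosen basic level anchors and the up/down probing policy below are illustrative assumptions, not the algorithm evaluated in the paper:

```python
# Illustrative sketch (not the paper's algorithm): probe a user's domain
# familiarity starting from assumed basic level anchor concepts, moving
# to subordinate concepts on a known anchor and retreating to
# superordinate concepts otherwise.

TAXONOMY = {  # parent -> children; a toy fragment of a domain graph
    "animal": ["dog", "bird"],
    "dog": ["labrador", "beagle"],
    "bird": ["sparrow", "penguin"],
}
BASIC_LEVEL = ["dog", "bird"]  # anchors assumed widely familiar

def probe(knows, anchors=BASIC_LEVEL, taxonomy=TAXONOMY):
    """Return concepts judged familiar, given a knows(concept) oracle."""
    familiar = set()
    for anchor in anchors:
        if knows(anchor):
            familiar.add(anchor)
            # user knows the anchor: probe the more specific concepts
            familiar.update(c for c in taxonomy.get(anchor, []) if knows(c))
        else:
            # cold start fallback: retreat to the superordinate level
            parents = [p for p, cs in taxonomy.items() if anchor in cs]
            familiar.update(p for p in parents if knows(p))
    return familiar

user_knowledge = {"dog", "labrador", "animal"}
print(sorted(probe(lambda c: c in user_knowledge)))
# ['animal', 'dog', 'labrador']
```

The point of anchoring at the basic level is that the first probes are cheap and likely to succeed, so the model gains signal even for a brand-new user.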
|
233 |
Flexible data architecture for enabling AI applications in production environmentsGrützman, Jossy Milagros, Rudolph, Franziska, Boesler, Martin, Wenzel, Ken 04 November 2024 (has links)
We present a flexible data architecture that combines RDF knowledge graphs for semantic descriptions with tailored data formats and APIs for time series data to create digital representations of production systems with related products, processes, and machines. We show that this data architecture can provide a basis for different machine learning and analytics use cases while also allowing the integration of existing standards like OPC UA.
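A minimal sketch of the described split between semantic descriptions and time series payloads might look as follows; the predicate names and stream identifiers are invented for illustration, not the paper's actual vocabulary:

```python
# Sketch of the described architecture: an RDF-style semantic layer
# describes machines, and a separate time series store holds the raw
# samples the knowledge graph points at. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class SemanticLayer:
    triples: set = field(default_factory=set)  # (subject, predicate, object)

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def series_for(self, machine):
        """Resolve which time series streams a machine exposes."""
        return {o for s, p, o in self.triples
                if s == machine and p == "hasTimeSeries"}

# Semantic description of a milling machine and its sensor streams
kg = SemanticLayer()
kg.add("machine:mill-1", "rdf:type", "ont:MillingMachine")
kg.add("machine:mill-1", "hasTimeSeries", "ts:mill-1/spindle-temp")
kg.add("machine:mill-1", "hasTimeSeries", "ts:mill-1/vibration")

# Time series store in a tailored format, keyed by stream identifier
store = {"ts:mill-1/spindle-temp": [21.5, 22.1], "ts:mill-1/vibration": [0.02]}

for stream in sorted(kg.series_for("machine:mill-1")):
    print(stream, store[stream])
```

An analytics use case queries the graph for context (which machine, which process) and only then pulls the bulk samples, which is what keeps the architecture flexible.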
|
234 |
Quantitative Risk Management and Pricing for Equity Based Insurance GuaranteesLeboho, Nakedi Wilson 03 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2015 / ENGLISH ABSTRACT : Equity-based insurance guarantees, also known as unit-linked annuities, are annuities with embedded exotic, long-term and path-dependent options, and can be categorised into variable and equity indexed annuities, whereby investors participate in the security markets through insurance companies that guarantee them a minimum of their invested premiums. The difference between financial options and the options embedded in equity-based policies is that financial options are financed by the option buyers' premiums, whereas the options in equity-based policies are also financed by continuous fees that follow the premium first paid by the policyholders, throughout the life of the contracts. Other important dissimilarities are that equity-based policies do not give the owner the right to sell the contract, and that they carry not just security market related risk, but also insurance related risks such as selection rate risk, behavioural risk, mortality risk, systematic longevity risk and others. Equity-based annuities are thus complicated insurance products to value and hedge precisely. For insurance companies to successfully fulfil their promise of eventually returning at least the initially invested amount to the policyholders, they have to be able to measure and manage the risks within the equity-based policies. In this thesis, we therefore derive fair prices for the variable and equity indexed annuities, and then discuss the management of financial market and insurance risks. / AFRIKAANSE OPSOMMING : Aandeel-gebaseerde versekeringswaarborge, ook bekend as eenheid-gekoppelde annuïteite, is annuïteite met ingeslote eksotiese, langtermyn- en pad-afhanklike opsies wat in veranderlike en aandeel-geïndekseerde annuïteite geklassifiseer kan word, waardeur beleggers deur middel van versekeringsmaatskappye aan die sekuriteitsmarkte deelneem en 'n minimum van hulle belegde premies gewaarborg word. Die verskil tussen finansiële opsies en die opsies wat in aandeel-gebaseerde polisse ingesluit is, is dat die finansiële opsies deur die opsiekopers se premies gefinansier word, terwyl die opsies van die aandeel-gebaseerde polisse ook gefinansier word deur deurlopende fooie wat gedurende die lewe van die kontrakte volg op die premie wat eers deur die polishouers betaal word. Ander belangrike verskille is dat aandeel-gebaseerde polisse nie aan die eienaar die reg gee om die kontrak te verkoop nie, en nie net sekuriteitsmarkverwante risiko dra nie, maar ook versekeringsrisiko's soos seleksiekoers-, gedrags-, sterfte- en sistematiese langslewendheidsrisiko's. Aandeel-gebaseerde annuïteite is dus baie ingewikkelde versekeringsprodukte om presies te waardeer en te verskans. Vir versekeringsmaatskappye om hul belofte suksesvol te vervul om uiteindelik ten minste die aanvanklik belegde bedrag aan die polishouers terug te betaal, moet hulle in staat wees om die risiko's binne die aandeel-gebaseerde polisse te meet en te bestuur. In hierdie tesis doen ons dus billike prysbepaling van die veranderlike en aandeel-geïndekseerde annuïteite, en bespreek dan die bestuur van finansiëlemark- en versekeringsrisiko's.
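For the equity indexed leg, the point-to-point crediting design commonly used in such products can be sketched as follows; the participation rate and guarantee level are illustrative assumptions, not figures from the thesis:

```python
# Hedged sketch of a point-to-point equity indexed annuity payoff: the
# policyholder receives the greater of a minimum guarantee and a share
# (the participation rate) of the index return. Figures are illustrative.

def eia_payoff(premium, index_start, index_end,
               participation=0.8, min_guarantee_rate=0.03, years=1):
    """Account value at maturity for a simple point-to-point design."""
    index_return = index_end / index_start - 1.0
    credited = max(participation * index_return,
                   min_guarantee_rate * years)  # guarantee floor
    return premium * (1.0 + credited)

# Index up 10%: 80% participation credits 8%, above the 3% floor
print(round(eia_payoff(1000.0, 100.0, 110.0), 2))  # 1080.0

# Index down 20%: the minimum guarantee binds
print(round(eia_payoff(1000.0, 100.0, 80.0), 2))   # 1030.0
```

The embedded option is visible in the `max(...)`: the insurer is short a call-like claim on the index, financed by the continuous fees described above, which is what makes fair pricing and hedging non-trivial.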
|
235 |
Selective disclosure and inference leakage problem in the Linked Data / Exposition sélective et problème de fuite d’inférence dans le Linked DataSayah, Tarek 08 September 2016 (has links)
L'émergence du Web sémantique a mené à une adoption rapide du format RDF (Resource Description Framework) pour décrire les données et les liens entre elles. Ce modèle de graphe est adapté à la représentation des liens sémantiques entre les objets du Web qui sont identifiés par des IRI. Les applications qui publient et échangent des données RDF potentiellement sensibles augmentent dans de nombreux domaines : bio-informatique, e-gouvernement, mouvements open-data. La problématique du contrôle des accès aux contenus RDF et de l'exposition sélective de l'information en fonction des privilèges des requérants devient de plus en plus importante. Notre principal objectif est d'encourager les entreprises et les organisations à publier leurs données RDF dans l'espace global des données liées. En effet, les données publiées peuvent être sensibles, et par conséquent, les fournisseurs de données peuvent être réticents à publier leurs informations, à moins qu'ils ne soient certains que les droits d'accès à leurs données par les différents requérants sont appliqués correctement. D'où l'importance de la sécurisation des contenus RDF et de l'exposition sélective de l'information pour différentes classes d'utilisateurs. Dans cette thèse, nous nous sommes intéressés à la conception d'un contrôle d'accès pertinent pour les données RDF. De nouvelles problématiques sont posées par l'introduction des mécanismes de déduction pour les données RDF (e.g., RDF/S, OWL), notamment le problème de fuite d'inférence. En effet, quand un propriétaire souhaite interdire l'accès à une information, il faut également qu'il soit sûr que les données diffusées ne pourront pas permettre de déduire des informations secrètes par l'intermédiaire des mécanismes d'inférence sur des données RDF. Dans cette thèse, nous proposons un modèle de contrôle d'accès à grains fins pour les données RDF.
Nous illustrons l'expressivité du modèle de contrôle d'accès avec plusieurs stratégies de résolution de conflits, y compris la Most Specific Takes Precedence. Nous proposons un algorithme de vérification statique et nous montrons qu'il est possible de vérifier à l'avance si une politique présente un problème de fuite d'inférence. De plus, nous montrons comment utiliser la réponse de l'algorithme à des fins de diagnostic. Pour traiter les privilèges des sujets, nous définissons la syntaxe et la sémantique d'un langage inspiré de XACML, basé sur les attributs des sujets, pour permettre la définition de politiques de contrôle d'accès beaucoup plus fines. Enfin, nous proposons une approche d'annotation de données pour appliquer notre modèle de contrôle d'accès, et nous montrons que notre implémentation entraîne un surcoût raisonnable durant l'exécution / The emergence of the Semantic Web has led to a rapid adoption of RDF (Resource Description Framework) to describe data and the links between them. The RDF graph model is tailored for the representation of semantic relations between Web objects that are identified by IRIs (Internationalized Resource Identifiers). Applications that publish and exchange potentially sensitive RDF data are increasing in many areas: bioinformatics, e-government, the open data movement. The problem of controlling access to RDF content and of selectively exposing information based on the privileges of the requester is becoming increasingly important. Our main objective is to encourage businesses and organizations worldwide to publish their RDF data into the linked data global space. Indeed, the published data may be sensitive, and consequently, data providers may be reluctant to release their information unless they are certain that the desired access rights of the different accessing entities are properly enforced on their data. 
Hence the issue of securing RDF content and ensuring the selective disclosure of information to different classes of users is becoming all the more important. In this thesis, we focus on the design of a relevant access control model for RDF data. The problem of providing access control for RDF data has attracted considerable attention from both the security and the database communities in recent years. New issues are raised by the introduction of deduction mechanisms for RDF data (e.g., RDF/S, OWL), including the inference leakage problem. Indeed, when an owner wishes to prohibit access to information, she/he must also ensure that information supposed to be secret cannot be inferred through inference mechanisms on RDF data. In this PhD thesis we propose a fine-grained access control model for RDF data. We illustrate the expressiveness of the access control model with several conflict resolution strategies, including Most Specific Takes Precedence. To tackle the inference leakage problem, we propose a static verification algorithm and show that it is possible to check in advance whether such a problem will arise. Moreover, we show how to use the answer of the algorithm for diagnosis purposes. To handle the subjects' privileges, we define the syntax and semantics of a XACML-inspired language based on the subjects' attributes to allow much finer access control policies. Finally, we propose a data-annotation approach to enforce our access control model, and show that our solution incurs reasonable overhead with respect to the optimal solution, which consists in materializing the user's accessible subgraph.
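The inference leakage problem described above can be sketched with a toy RDFS subclass entailment; this is an illustrative reconstruction, not the thesis's actual model or verification algorithm:

```python
# Toy sketch of the inference leakage problem: even if a triple is
# denied by the policy, it may still be derivable from the permitted
# triples through an RDFS subclass entailment rule.

def rdfs_subclass_closure(triples):
    """Saturate with: (x type C1) + (C1 subClassOf C2) => (x type C2)."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        new = {(x, "type", c2)
               for (x, p1, c1) in inferred if p1 == "type"
               for (c1b, p2, c2) in inferred
               if p2 == "subClassOf" and c1b == c1}
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

data = {
    ("alice", "type", "Patient"),
    ("Patient", "subClassOf", "Person"),
}
denied = {("alice", "type", "Person")}  # policy hides the superclass fact
visible = data - denied                  # the Patient triple stays visible

# Static check: does entailment over the visible triples leak a secret?
leaked = rdfs_subclass_closure(visible) & denied
print(leaked)
# {('alice', 'type', 'Person')} -- the "secret" is inferable: a leak
```

A static verification in this spirit runs the closure over each policy-visible subgraph ahead of time, so a leaking policy can be flagged and diagnosed before any data is published.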
|
236 |
Data Fusion in Spatial Data InfrastructuresWiemann, Stefan 12 January 2017 (has links)
Over the past decade, the public awareness and availability as well as methods for the creation and use of spatial data on the Web have steadily increased. Besides the establishment of governmental Spatial Data Infrastructures (SDIs), numerous volunteered and commercial initiatives had a major impact on that development. Nevertheless, data isolation still poses a major challenge. Whereas the majority of approaches focuses on data provision, means to dynamically link and combine spatial data from distributed, often heterogeneous data sources in an ad hoc manner are still very limited. However, such capabilities are essential to support and enhance information retrieval for comprehensive spatial decision making.
To facilitate spatial data fusion in current SDIs, this thesis has two main objectives. First, it focuses on the conceptualization of a service-based fusion process to functionally extend current SDI and to allow for the combination of spatial data from different spatial data services. It mainly addresses the decomposition of the fusion process into well-defined and reusable functional building blocks and their implementation as services, which can be used to dynamically compose meaningful application-specific processing workflows. Moreover, geoprocessing patterns, i.e. service chains that are commonly used to solve certain fusion subtasks, are designed to simplify and automate workflow composition.
Second, the thesis deals with the determination, description and exploitation of spatial data relations, which play a decisive role for spatial data fusion. The approach adopted is based on the Linked Data paradigm and therefore bridges SDI and Semantic Web developments. Whereas the original spatial data remains within SDI structures, relations between those sources can be used to infer spatial information by means of Semantic Web standards and software tools.
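Such a relation between features held in two different SDIs can be written down as a single Linked Data statement; the feature IRIs below are invented for illustration, while `geo:sfWithin` is a topological relation defined by the OGC GeoSPARQL vocabulary:

```python
# Sketch: describe a spatial relation between two features, hosted in
# different SDIs, as one Linked Data (N-Triples) statement. The feature
# IRIs are assumptions; only the GeoSPARQL predicate is standard.

def relation_triple(subject_iri, relation_iri, object_iri):
    """Serialize one relation as an N-Triples line."""
    return f"<{subject_iri}> <{relation_iri}> <{object_iri}> ."

triple = relation_triple(
    "http://sdi-a.example.org/features/building/42",
    "http://www.opengis.net/ont/geosparql#sfWithin",
    "http://sdi-b.example.org/features/district/7",
)
print(triple)
```

The original geometries stay in their SDI services; only lightweight statements like this one cross over into the Semantic Web, where standard tooling can query and reason over them.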
A number of use cases were developed, implemented and evaluated to underpin the proposed concepts. Particular emphasis was put on the use of established open standards to realize an interoperable, transparent and extensible spatial data fusion process and to support the formalized description of spatial data relations. The developed software, which is based on a modular architecture, is available online as open source. It allows for the development and seamless integration of new functionality as well as the use of external data and processing services during workflow composition on the Web. / Die Entwicklung des Internet im Laufe des letzten Jahrzehnts hat die Verfügbarkeit und öffentliche Wahrnehmung von Geodaten, sowie Möglichkeiten zu deren Erfassung und Nutzung, wesentlich verbessert. Dies liegt sowohl an der Etablierung amtlicher Geodateninfrastrukturen (GDI), als auch an der steigenden Anzahl Communitybasierter und kommerzieller Angebote. Da der Fokus zumeist auf der Bereitstellung von Geodaten liegt, gibt es jedoch kaum Möglichkeiten die Menge an, über das Internet verteilten, Datensätzen ad hoc zu verlinken und zusammenzuführen, was mitunter zur Isolation von Geodatenbeständen führt. Möglichkeiten zu deren Fusion sind allerdings essentiell, um Informationen zur Entscheidungsunterstützung in Bezug auf raum-zeitliche Fragestellungen zu extrahieren.
Um eine ad hoc Fusion von Geodaten im Internet zu ermöglichen, behandelt diese Arbeit zwei Themenschwerpunkte. Zunächst wird eine dienstebasierten Umsetzung des Fusionsprozesses konzipiert, um bestehende GDI funktional zu erweitern. Dafür werden wohldefinierte, wiederverwendbare Funktionsblöcke beschrieben und über standardisierte Diensteschnittstellen bereitgestellt. Dies ermöglicht eine dynamische Komposition anwendungsbezogener Fusionsprozesse über das Internet. Des weiteren werden Geoprozessierungspatterns definiert, um populäre und häufig eingesetzte Diensteketten zur Bewältigung bestimmter Teilaufgaben der Geodatenfusion zu beschreiben und die Komposition und Automatisierung von Fusionsprozessen zu vereinfachen.
Als zweiten Schwerpunkt beschäftigt sich die Arbeit mit der Frage, wie Relationen zwischen Geodatenbeständen im Internet erstellt, beschrieben und genutzt werden können. Der gewählte Ansatz basiert auf Linked Data Prinzipien und schlägt eine Brücke zwischen diensteorientierten GDI und dem Semantic Web. Während somit Geodaten in bestehenden GDI verbleiben, können Werkzeuge und Standards des Semantic Web genutzt werden, um Informationen aus den ermittelten Geodatenrelationen abzuleiten.
Zur Überprüfung der entwickelten Konzepte wurde eine Reihe von Anwendungsfällen konzipiert und mit Hilfe einer prototypischen Implementierung umgesetzt und anschließend evaluiert. Der Schwerpunkt lag dabei auf einer interoperablen, transparenten und erweiterbaren Umsetzung dienstebasierter Fusionsprozesse, sowie einer formalisierten Beschreibung von Datenrelationen, unter Nutzung offener und etablierter Standards. Die Software folgt einer modularen Struktur und ist als Open Source frei verfügbar. Sie erlaubt sowohl die Entwicklung neuer Funktionalität durch Entwickler als auch die Einbindung existierender Daten- und Prozessierungsdienste während der Komposition eines Fusionsprozesses.
|
237 |
Exploration of mutations in erythroid 5-aminolevulinate synthase that lead to increased porphyrin synthesisFratz, Erica Jean 20 March 2014 (has links)
5-Aminolevulinate synthase (ALAS; EC 2.3.1.37) is a pyridoxal 5'-phosphate (PLP)-dependent enzyme that catalyzes the first committed step of heme biosynthesis in animals, the condensation of glycine and succinyl-CoA yielding 5-aminolevulinate (ALA), CoA, and CO2. Murine erythroid-specific ALAS (mALAS2) variants that cause high levels of protoporphyrin IX (PPIX) accumulation provide a new means of targeted, and potentially enhanced, photosensitization. Transfection of HeLa cells with expression plasmids for mALAS2 variants, specifically for those with mutated mitochondrial presequences and a mutation in the active site loop, caused significant cellular accumulation of PPIX, particularly in the membrane. Light treatment of HeLa cells expressing mALAS2 variants revealed that mALAS2 expression results in an increase in cell death in comparison to aminolevulinic acid (ALA) treatment producing a similar amount of PPIX. Generation of PPIX is a crucial component in the widely used photodynamic therapies (PDT) of cancer and other dysplasias. The delivery of stable and highly active mALAS2 variants has the potential to expand and improve upon current PDT regimes.
Mutations in the C-terminus of human ALAS2 (hALAS2) can increase hALAS2 activity and are associated with X-linked erythropoietic protoporphyria (XLEPP), a disease phenotypically characterized by elevated levels of PPIX and zinc protoporphyrin in erythroblasts. This is apparently due to enhanced cellular hALAS2 activity, but the biochemical relationship between these C-terminal mutations and increased hALAS2 activity is not well understood. hALAS2 and three XLEPP variants were studied both in vitro, to compare kinetic and structural parameters, and ex vivo in HeLa and K562 cells. Two XLEPP variants, delAGTG and Q548X, exhibited higher catalytic rates and affinity for succinyl-CoA than wild-type hALAS2, had increased transition temperatures, and caused porphyrin accumulation in HeLa and K562 cells. Another XLEPP mutation, delAT, had an increased transition temperature and caused porphyrin accumulation in mammalian cells, but exhibited a reduced catalytic rate at 37 °C in comparison to wild-type hALAS2. The XLEPP variants, unlike wild-type hALAS2, were more structurally responsive upon binding of succinyl-CoA, and adopted distinct features in tertiary structure and the PLP cofactor-binding site. These results imply that the C-terminus of hALAS2 is important for regulating its structural integrity, which affects kinetic activity and stability.
XLEPP has only recently been identified as a blood disorder, and thus there are no specific treatments. One potential treatment involves the use of the antibiotic isonicotinic acid hydrazide (isoniazid, INH), commonly used to treat tuberculosis. INH can cause sideroblastic anemia as a side-effect and has traditionally been thought to do so by limiting PLP availability to hALAS2 via direct inhibition of pyridoxal kinase and by reacting with pyridoxal to form pyridoxal isonicotinoyl hydrazone. We postulated that in addition to PLP-dependent inhibition of hALAS2, INH acts directly on hALAS2. Using FACS and confocal microscopy, we show here that INH reduces protoporphyrin IX accumulation in HeLa cells expressing either wild-type hALAS2 or XLEPP variants. In addition, PLP and pyridoxamine 5'-phosphate (PMP) restored cellular hALAS2 activity in the presence of INH. Kinetic analyses with purified hALAS2 demonstrated non-competitive or uncompetitive inhibition with an apparent Ki of 1.5 µM. Circular dichroism studies revealed that INH triggers structural changes in hALAS2 that interfere with the association of hALAS2 with its PLP cofactor. These studies demonstrate that hALAS2 can be directly inhibited by INH, provide insight into the mechanism of inhibition, and support the prospective use of INH in treating patients with XLEPP and potentially other cutaneous porphyrias.
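The reported apparent Ki can be placed in context with the standard rate law for pure non-competitive inhibition; all parameter values below except Ki = 1.5 µM are illustrative assumptions:

```python
# Worked sketch of how an apparent Ki enters the rate law. For pure
# non-competitive inhibition, v = Vmax * S / ((Km + S) * (1 + I / Ki)).
# Vmax, Km and the substrate level are illustrative; Ki = 1.5 uM is
# the apparent value reported in the text.

def v_noncompetitive(s, i, vmax=1.0, km=10.0, ki=1.5):
    """Rate (arbitrary units); s and km share a unit, i and ki in uM."""
    return vmax * s / ((km + s) * (1.0 + i / ki))

v0 = v_noncompetitive(10.0, 0.0)  # uninhibited: Vmax/2 at s = Km
v1 = v_noncompetitive(10.0, 1.5)  # at I = Ki the rate halves again
print(round(v0, 3), round(v1, 3))  # 0.5 0.25
```

Note the substrate-independent factor (1 + I/Ki): unlike competitive inhibition, raising substrate concentration cannot rescue the rate, consistent with an inhibitor acting on cofactor association rather than the substrate site.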
|
238 |
Audiovisual e Web Semântica: Estudo de Caso da Biblioteca da ECA / -Cavalcante, Denise Gomes Silva Morais 10 January 2019 (has links)
A navegação e recuperação entre recursos de catálogos diferentes através de tecnologias Linked Data e da web semântica pode diminuir a sobrecarga para gestão, interoperabilidade e compartilhamento de dados como forma de cooperação institucional, além de ser um modo diferente de navegação entre acervos de instituições e ambientes informacionais externos, possibilitando novas formas de consulta de dados. O objetivo desta pesquisa é identificar os instrumentos e metodologias de representação descritiva, temática e recuperação de documentos audiovisuais no contexto de bibliotecas, arquivos fílmicos e da web semântica. Dessa forma, a metodologia inclui a revisão de literatura da área para estudo do estado da arte e o levantamento de tecnologias da web semântica que visam a criação de padrões de metadados, vocabulários, ontologias e modelos conceituais voltados a anotação e descrição audiovisual, assim como uma parte empírica com estudo de caso do catálogo e do manual de filmes da Biblioteca da ECA. / Navigation and retrieval across different catalogs through Linked Data and semantic web technologies can reduce the overhead of data management, interoperability and sharing as a form of institutional cooperation, besides being a different way of navigating between institutional collections and external informational environments, enabling new ways of querying data. The objective of this research is to identify the instruments and methodologies of descriptive and thematic representation and retrieval of audiovisual documents in the context of libraries, film archives and the semantic web. 
Thus, the methodology includes the review of the literature of the area for the study of the state of the art and the survey of semantic web technologies that aim at the creation of standards of metadata, vocabularies, ontologies and conceptual models aimed at annotation and audiovisual description, as well as an empirical part with a case study of the catalog and the film manual of the ECA Library.
|
239 |
Swedish Breakeven Inflation (BEI) - a market based measure of inflation expectations?Calmvik, Jonas January 2008 (has links)
The Fisher equation suggests that the spread between nominal and real interest rates is equal to the inflation expectations. In Sweden, where both nominal and inflation linked bonds exist, the Fisher equation implies that the yield spread could provide investors and policymakers with important information about the market's inflation expectations. The aim of this thesis is therefore to estimate whether the yield spread between Swedish nominal and real interest rates - widely referred to as the Breakeven Inflation (BEI) - is a market based measure of inflation expectations. A sample based on historical bond prices between 2000 and 2007 is used and adjusted for three distortions: i) the mismatch in cash flow structure arising from different bond characteristics; ii) the inflation indexation and bond financing implications (carry); iii) the seasonality in the Consumer Price Index (CPI). In the absence of "true" inflation expectations, the benchmark used for the evaluation and comparison of the unadjusted and adjusted BEI series is the survey based Prospera Money Market Players inflation expectations, i.e. professional forecasters. The evaluation uses two statistical measures to estimate the errors: the Root Mean Squared Error (RMSE) to estimate the size of the forecast error, and the Mean Error (ME) to measure the bias, i.e. the tendency of the forecast error to point in a particular direction. The general conclusion of the study is that both the unadjusted and the adjusted BEI series have improved significantly throughout the sample period as predictors of inflation expectations.
Further, in the first half of the sample, the MEs show that the BEI tends to underestimate inflation expectations, while in the second half the direction of the errors is less univocal. However, the carry adjusted and, to some extent, the carry and seasonality adjusted BEI seem to improve the BEI somewhat, although the conclusions are not very convincing. When using BEI to measure inflation expectations, the conclusions should also be balanced against the possible bias associated with survey based expectations.
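The breakeven rate and the two error measures can be sketched directly from their definitions; the yields and survey figures below are invented for illustration:

```python
# Sketch of the quantities used above: the Fisher-style breakeven rate
# from nominal and real yields, then RMSE and ME of the BEI series
# against survey expectations. All input numbers are illustrative.

def breakeven(nominal, real):
    """Fisher relation: (1 + n) / (1 + r) - 1, approx. n - r."""
    return (1.0 + nominal) / (1.0 + real) - 1.0

def rmse(errors):
    """Root Mean Squared Error: the typical size of the forecast error."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

def mean_error(errors):
    """Mean Error: a negative value signals systematic underestimation."""
    return sum(errors) / len(errors)

bei = [breakeven(n, r) for n, r in [(0.040, 0.020), (0.043, 0.021)]]
survey = [0.021, 0.022]                      # e.g. Prospera-style figures
errors = [b - s for b, s in zip(bei, survey)]

print(round(rmse(errors), 5), round(mean_error(errors), 5))
```

A negative ME, as in this toy sample, is the pattern the thesis reports for the first half of its period: the BEI sits below the surveyed expectations.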
|