511

Automatically Extract Information from Web Documents

Sharma, Dipesh 01 December 2007 (has links)
The Internet can be considered a reservoir of useful information in textual form: product catalogs, airline schedules, stock market quotations, weather forecasts, and so on. There has been much interest in building systems that gather such information on a user's behalf. But because these information resources are formatted differently, mechanically extracting their content is difficult, so systems that use them typically rely on hand-coded wrappers: procedures customized for extracting information from a particular source. Structured data objects are a particularly important type of information on the Web. Such objects are often records from underlying databases, displayed in Web pages with fixed templates. Mining data records in Web pages is useful because they typically carry their host pages' essential information, such as lists of products and services. Extracting these structured data objects makes it possible to integrate data from multiple Web pages and provide value-added services such as comparison shopping, meta-querying, and search. Web content mining has thus become an area of interest for many researchers because of the phenomenal growth of Web content and the economic benefits associated with it. However, due to the heterogeneity of Web pages, automated discovery of targeted information remains a challenging problem.
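As a toy illustration of the wrapper idea described above (the HTML snippet and field names here are invented for illustration, not taken from the thesis), a hand-coded wrapper can be as simple as a regular expression keyed to one page's fixed template — which is exactly why such wrappers break when the format changes:

```python
import re

# Hypothetical product listing; real pages vary, which is why hand-coded
# wrappers like this one are brittle.
html = """
<li class="item"><span class="name">Widget</span><span class="price">$9.99</span></li>
<li class="item"><span class="name">Gadget</span><span class="price">$24.50</span></li>
"""

# A wrapper is a source-specific extraction rule; here, one regular
# expression matching the page's fixed record template.
record = re.compile(
    r'<span class="name">(?P<name>[^<]+)</span>'
    r'<span class="price">\$(?P<price>[\d.]+)</span>'
)

records = [m.groupdict() for m in record.finditer(html)]
# records now holds one dict per structured data object on the page.
```

Automated approaches aim to discover the repeating template itself, instead of having a human encode it per site.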
512

Fostering competence in software engineering through a multi-stage teaching concept in the mechatronics degree program

Abke, Jörg, Schwirtlich, Vincent, Sedelmaier, Yvonne January 2013 (has links)
This contribution presents the teaching and learning concept for fostering competence in software engineering in the mechatronics degree program at Hochschule Aschaffenburg. The concept is multi-stage, with lecture, seminar, and project sequences. Challenges and potential improvements are identified and described. Finally, an overview is given of how teaching and learning concepts can be developed further within a recently started research project.
513

Initial data for axially symmetric black holes with distorted apparent horizons

Tonita, Aaryn 05 1900 (has links)
The production of axisymmetric initial data for distorted black holes at a moment of time symmetry is considered within the (3+1) formulation of general relativity. The initial data are made to contain a distorted marginally trapped surface, ensuring that, modulo cosmic censorship, the spacetime will contain a black hole. The resulting equations on the complicated domain are solved using the piecewise linear finite element method, whose mesh adapts to the curved marginally trapped surface. The initial data are then analyzed to calculate the mass of the spacetime as well as an upper bound on the fraction of the total energy available for radiation. The families of initial data considered contain no more than a few percent of the total energy available for radiation, even in cases of extreme distortion. It is shown that the mass of certain initial data slices depends to first order on the area of the marginally trapped surface and the Gaussian curvature of prominent features.
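To sketch why time symmetry simplifies the problem: the extrinsic curvature vanishes there, so the momentum constraint is trivially satisfied, and under a conformally flat ansatz (used here purely for illustration; the thesis treats more general axisymmetric data) the vacuum Hamiltonian constraint reduces to a flat-space Laplace equation for the conformal factor, with the marginally trapped surface appearing as a Robin-type boundary condition:

```latex
% Time symmetry: K_{ij} = 0, so only the Hamiltonian constraint remains.
% For g_{ij} = \psi^4 \bar{g}_{ij} with \bar{g}_{ij} flat, it becomes
\bar{\nabla}^2 \psi = 0,
\qquad
\psi \to 1 + \frac{M}{2r} \quad (r \to \infty),
% while a marginally trapped surface S (a minimal surface at a moment of
% time symmetry) imposes the boundary condition
\left.\left( \frac{\partial \psi}{\partial n}
      + \frac{\bar{H}}{4}\,\psi \right)\right|_{S} = 0,
% where \bar{H} is the mean curvature of S in the flat background metric
% and M is the ADM mass read off from the asymptotic falloff of \psi.
```

The finite element method is then a natural fit, since the mesh can conform to an arbitrarily distorted surface S.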
514

Acceptance and usability of the AusweisApp : a qualitative investigation ; a study at the Hasso-Plattner-Institut für Softwaresystemtechnik commissioned by the Federal Ministry of the Interior

Asheuer, Susanne, Belgassem, Joy, Eichorn, Wiete, Leipold, Rio, Licht, Lucas, Meinel, Christoph, Schanz, Anne, Schnjakin, Maxim January 2013 (has links)
For the study "Qualitative investigation of the acceptance of the new German identity card and development of proposals for improving the usability of the AusweisApp software", an innovation team used the design thinking method to address the question "How can we make the AusweisApp intuitive and understandable for users?" First, acceptance of the new identity card was examined. Citizens were asked about their knowledge of and expectations for the new identity card, as well as about their general use of it, their use of the online identification function, and the usability of the AusweisApp. Users were also observed while using the current AusweisApp and interviewed afterwards, which gave deep insight into their needs. The findings of the qualitative investigation were used to develop improvement proposals for the AusweisApp that match citizens' needs. The proposals for optimizing the AusweisApp were implemented as prototypes and tested with potential users. The tests showed that the newly developed features make access to the online identification function considerably easier for citizens. Overall, acceptance of the new identity card was found to diverge strongly: respondents' attitudes ranged from skepticism to approval, making the new identity card a polarizing topic. The user tests uncovered numerous opportunities for improving the existing service design, both around the new identity card itself and in connection with the software. During the user tests that followed the ideation and prototyping phases, the innovation team was able to iterate on and validate its proposals. The proposals developed concern the AusweisApp.
The new features essentially comprise: direct access to service providers; extensive help resources (tooltips, FAQ, wizard, video); a history function; and an example service that lets users experience the online identification function. Above all, the new version of the AusweisApp should offer users concrete fields of application for their new identity card, and thus real added value. Developing further features for the AusweisApp can help the new identity card realize its full potential.
515

Enriching raw events to enable process intelligence : research challenges

Herzberg, Nico, Weske, Mathias January 2013 (has links)
Business processes are performed as part of a company's daily business, and valuable data about the process execution is produced along the way. The quantity and quality of this data depend strongly on the process execution environment, which can range from predominantly manual to fully automated. Process improvement is an essential cornerstone of business process management, ensuring companies' competitiveness, and it relies on information about process execution. Especially in manual process environments, data directly related to the process execution is sparse and incomplete. In this paper, we present an approach that supports the use and enrichment of process execution data with context data – data that exists orthogonally to business process data – and with knowledge from the corresponding process models, to provide a high-quality event base for process intelligence, which subsumes, among others, process monitoring, process analysis, and process mining. Further, we discuss open issues and challenges that are subject to our future work.
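A minimal sketch of the enrichment idea follows; all names and mappings are illustrative assumptions, not the authors' implementation. A raw execution event is correlated with orthogonal context data and with activity labels taken from the process model:

```python
from dataclasses import dataclass

@dataclass
class RawEvent:
    case_id: str
    timestamp: str
    payload: dict

# Context data that exists orthogonally to the process execution,
# e.g. which department staffs a given counter (hypothetical example).
CONTEXT = {"counter-3": {"department": "logistics"}}

# Knowledge from the process model: which observed signal corresponds
# to which modeled activity (hypothetical binding).
MODEL_BINDING = {"scan": "Register shipment"}

def enrich(event: RawEvent) -> dict:
    """Combine a raw event with context data and model knowledge."""
    enriched = {"case_id": event.case_id, "timestamp": event.timestamp}
    enriched["activity"] = MODEL_BINDING.get(event.payload.get("signal"), "unknown")
    enriched.update(CONTEXT.get(event.payload.get("location"), {}))
    return enriched

e = enrich(RawEvent("c1", "2013-05-01T09:00",
                    {"signal": "scan", "location": "counter-3"}))
```

The enriched event carries an activity label and contextual attributes that the raw log alone does not contain — exactly the kind of high-quality event base that process monitoring, analysis, and mining need.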
516

Automated annotation of protein families / Automatiserad annotering av proteinfamiljer

Elfving, Eric January 2011 (has links)
Introduction: The great challenge in bioinformatics is data integration. The amount of available data is always increasing, and there are no common, unified standards for where, or how, the data should be stored. The aim of this work is to build an automated tool to annotate the member families within the protein superfamily of medium-chain dehydrogenases/reductases (MDR) by finding common properties among the member proteins. The goal is to increase the understanding of the MDR superfamily as well as of its member families. This adds to the knowledge gained for free when a new, unannotated protein is matched as a member of a specific MDR member family. Method: The different types of data available all needed different handling. Textual data was mainly compared as strings, while numeric data needed special handling such as statistical calculations. Ontological data was handled as tree nodes, where ancestry between terms had to be considered. This was implemented as a plugin-based system to make the tool easy to extend with additional data sources of different types. Results: The biggest challenge was data incompleteness, which yielded few (or no) results for some families and thus decreased the statistical significance of the results. The results show that all the human and mouse MDR members have a Pfam ADH domain (ADH_N and/or ADH_zinc_N) and take part in an oxidation-reduction process, often with NAD or NADP as cofactor. Many of the proteins contain zinc and are expressed in liver tissue. Conclusions: A Python-based tool for automatic annotation has been created to annotate the MDR member families. The tool is easily extended to new databases, and much of its output agrees with information found in the literature.
The utility and necessity of this system, as well as the quality of its results, are expected only to increase over time, even if no additional extensions are produced, as the system itself is able to make further and more detailed inferences as more data become available.
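The plugin idea above can be sketched in a few lines; the plugin names, data, and structure here are assumptions for illustration, not the thesis's actual code. Each plugin knows how to summarize one data type across a family's members:

```python
import statistics

def text_plugin(values):
    # Textual data: report any value shared by every member of the family.
    common = set(values[0]).intersection(*map(set, values[1:]))
    return sorted(common)

def numeric_plugin(values):
    # Numeric data: basic statistics instead of string comparison.
    return {"mean": statistics.mean(values), "stdev": statistics.pstdev(values)}

# New data types are supported by registering another plugin here.
PLUGINS = {"text": text_plugin, "numeric": numeric_plugin}

def annotate(family):
    """Summarize each property of a family using the plugin for its type."""
    return {prop: PLUGINS[kind](vals) for prop, (kind, vals) in family.items()}

# Hypothetical two-member family with a textual and a numeric property.
family = {
    "domains": ("text", [["ADH_N", "ADH_zinc_N"], ["ADH_N"]]),
    "length":  ("numeric", [350, 374]),
}
result = annotate(family)
```

An ontological plugin would slot into the same registry, with its comparison walking the term hierarchy instead of comparing strings.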
517

Data replication in mobile computing

Pamplona, Rodrigo Christovam January 2010 (has links)
With the advances of technology and the popularization of mobile devices, the need to research and discuss subjects related to mobile devices has grown. One subject that needs further analysis is data replication. This study investigates data replication on mobile devices, focusing on power consumption. It presents four scenarios that propose, describe, apply, and evaluate data replication mechanisms, with the purpose of finding the scenario with the lowest energy consumption. For the experiments, the Sun SPOT was chosen as the mobile device; it is programmed entirely in a Java environment. Different software was created for each scenario in order to measure the devices' performance with respect to energy saving. The results did not meet expectations: while searching for the best scenario, a hardware limitation was found. Although software can easily be changed to fix errors, hardware cannot, and this limitation prevented the results from being optimal. The results also imply that new hardware should be used in further experimentation; since this study proved to be limited in that respect, it suggests that additional studies be carried out with the new version of the hardware used here.
518

Android-Based Information Synchronization in Social Networks

Ji, Yu-Shin 26 July 2010 (has links)
Computers were originally developed for complex computation, and they have evolved from mainframes in enterprises and computing centers to desktops at home. Since the rapid spread of the Internet, the computer has taken on an important role in people's lives: it helps us with our work and our calculations, and it even links people together through e-mail and instant messaging software. With computers at our side, we live a much more convenient life. Google announced its mobile operating system "Android" on November 5th, 2007. Android is built on the Linux kernel, which means an Android phone can be treated as a portable computer, with applications designed for entertainment, Internet surfing, and social communication. Social communication has proven to be an important issue: for instance, Facebook, a social network service online since February 4th, 2004, gives people a place to share messages, photos, and news, and has more than ten million users today. The mobile phone has become a very convenient way to communicate, but users may also wish to share photos, music, or documents with friends. This thesis explores a new way to share views, photos, and music with friends immediately, and describes the issues involved in data synchronization between mobile phones.
519

Modeling covariance structure in unbalanced longitudinal data

Chen, Min 15 May 2009 (has links)
Modeling covariance structure is important for efficient estimation in longitudinal data models. The modified Cholesky decomposition (Pourahmadi, 1999) is used as an unconstrained reparameterization of the covariance matrix. The resulting new parameters have transparent statistical interpretations and are easily modeled using covariates. However, this approach is not directly applicable when the longitudinal data are unbalanced, because a Cholesky factorization of the observed data that is coherent across all subjects usually does not exist. We overcome this difficulty by treating the problem as a missing data problem and employing a generalized EM algorithm to compute the ML estimators. We study the covariance matrices in both fixed-effects models and mixed-effects models for unbalanced longitudinal data. We illustrate our method by reanalyzing Kenward's (1987) cattle data and conducting simulation studies.
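The decomposition can be sketched numerically (a generic illustration of Pourahmadi's reparameterization, not the thesis's estimation code): T Σ Tᵀ = D, where T is unit lower triangular whose below-diagonal entries are the negated coefficients from regressing each measurement on its predecessors, and D holds the innovation variances. Unlike Σ itself, these parameters are unconstrained:

```python
import numpy as np

def modified_cholesky(sigma):
    """Return (T, D) with T unit lower triangular and T @ sigma @ T.T = D."""
    n = sigma.shape[0]
    T = np.eye(n)
    d = np.empty(n)
    d[0] = sigma[0, 0]
    for j in range(1, n):
        # Regress measurement j on measurements 0..j-1.
        phi = np.linalg.solve(sigma[:j, :j], sigma[:j, j])
        T[j, :j] = -phi                      # negated autoregressive coefficients
        d[j] = sigma[j, j] - sigma[:j, j] @ phi  # innovation variance
    return T, np.diag(d)

# A small positive-definite covariance matrix (illustrative values).
sigma = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.6],
                  [0.3, 0.6, 1.2]])
T, D = modified_cholesky(sigma)
```

The difficulty with unbalanced data is visible here: each row of T is defined relative to all preceding measurement times, so subjects observed at different time points do not share one coherent T.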
520

Privacy-preserving data mining

Zhang, Nan 15 May 2009 (has links)
In privacy-preserving data mining research, we address issues related to extracting knowledge from large amounts of data without violating the privacy of the data owners. In this study, we first introduce an integrated baseline architecture, design principles, and implementation techniques for privacy-preserving data mining systems. We then discuss the key components of such systems, which comprise three protocols: data collection, inference control, and information sharing. We present and compare strategies for realizing these protocols. Theoretical analysis and experimental evaluation show that our protocols can generate accurate data mining models while protecting the privacy of the data being mined.
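A classic building block for the kind of data-collection protocol mentioned above is Warner-style randomized response; the sketch below is a generic illustration of the principle, not the authors' protocol. Each owner randomly perturbs their sensitive bit before sending it, so no individual report is trustworthy, yet the miner can invert the perturbation in aggregate:

```python
import random

def perturb(bit, p, rng):
    # Report the true bit with probability p, the flipped bit otherwise.
    return bit if rng.random() < p else 1 - bit

def estimate_true_rate(reported_rate, p):
    # E[reported] = p * true + (1 - p) * (1 - true); solve for true.
    return (reported_rate - (1 - p)) / (2 * p - 1)

rng = random.Random(0)          # fixed seed for reproducibility
p = 0.8
true_bits = [1] * 300 + [0] * 700            # true rate of the sensitive bit: 0.3
reported = [perturb(b, p, rng) for b in true_bits]
est = estimate_true_rate(sum(reported) / len(reported), p)
# est recovers roughly 0.3, although any single reported bit may be a lie.
```

Lower p gives each owner stronger deniability at the cost of a noisier aggregate estimate, which is the accuracy/privacy trade-off such protocols must balance.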
