661

SLACID - Sparse Linear Algebra in a Column-Oriented In-Memory Database System

Kernert, David, Köhler, Frank, Lehner, Wolfgang 19 September 2022 (has links)
Scientific computations and analytical business applications are often based on linear algebra operations on large, sparse matrices. With the hardware shift of the primary storage from disc into memory it is now feasible to execute linear algebra queries directly in the database engine. This paper presents and compares different approaches of storing sparse matrices in an in-memory column-oriented database system. We show that a system layout derived from the compressed sparse row representation integrates well with a columnar database design and that the resulting architecture is moreover amenable to a wide range of non-numerical use cases when dictionary encoding is used. Dynamic matrix manipulation operations, like online insertion or deletion of elements, are not covered by most linear algebra frameworks. Therefore, we present a hybrid architecture that consists of a read-optimized main and a write-optimized delta structure and evaluate the performance for dynamic sparse matrix workloads by applying workflows of nuclear science and network graphs.
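
The hybrid main/delta layout described above lends itself to a compact illustration. The sketch below is not the SLACID implementation; it only shows, under assumed simplifications, how a read-optimized CSR "main" (three flat arrays, much like three table columns in a column store) can be paired with a write-optimized delta that absorbs online insertions and is periodically merged back. Lookups consult the delta first so that recent updates shadow the merged main, mirroring the read path of a main/delta column store.

```python
from bisect import bisect_left

class HybridSparseMatrix:
    """Read-optimized CSR 'main' plus a write-optimized 'delta' for online updates (illustrative only)."""

    def __init__(self, n_rows):
        self.n_rows = n_rows
        self.row_ptr = [0] * (n_rows + 1)   # CSR main: three flat arrays,
        self.col_idx = []                   # analogous to three columns
        self.values = []                    # in a column-oriented store.
        self.delta = {}                     # (row, col) -> value, cheap to update

    def insert(self, i, j, v):
        self.delta[(i, j)] = v              # O(1) online insertion or update

    def get(self, i, j):
        if (i, j) in self.delta:            # recent writes shadow the main
            return self.delta[(i, j)]
        lo, hi = self.row_ptr[i], self.row_ptr[i + 1]
        k = bisect_left(self.col_idx, j, lo, hi)
        return self.values[k] if k < hi and self.col_idx[k] == j else 0.0

    def merge_delta(self):
        """Fold the delta into a freshly built CSR main, like a periodic delta merge."""
        entries = {(i, self.col_idx[k]): self.values[k]
                   for i in range(self.n_rows)
                   for k in range(self.row_ptr[i], self.row_ptr[i + 1])}
        entries.update(self.delta)
        self.delta.clear()
        self.row_ptr, self.col_idx, self.values = [0], [], []
        for i in range(self.n_rows):
            for j, v in sorted((j, v) for (r, j), v in entries.items() if r == i):
                self.col_idx.append(j)
                self.values.append(v)
            self.row_ptr.append(len(self.values))
```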
662

Using ontologies to semantify a Web information portal

Chimamiwa, Gibson 01 1900 (has links)
An ontology, an explicit specification of a shared conceptualisation, captures knowledge about a specific domain of interest. The realisation of ontologies revolutionised the way data stored in relational databases is accessed and manipulated, through ontology and database integration. When integrating ontologies with relational databases, several choices exist regarding aspects such as database implementation, ontology language features, and mappings. However, it is unclear which aspects are relevant and when they affect specific choices. This makes it difficult to decide which choices to make and to assess their implications for ontology and database integration solutions. Within this study, a decision-making tool is developed that guides users when selecting a technology and developing a solution that integrates ontologies with relational databases. A theory analysis is conducted to determine the current status of technologies that integrate ontologies with databases. Furthermore, a theoretical study is conducted to determine the important features affecting ontology and database integration, ontology language features, and the choices that one needs to make given each technology. Based on these building blocks, an artifact-building approach is used to develop the decision-making tool, and the tool is verified through a proof-of-concept to demonstrate its usefulness. Key terms: Ontology, semantics, relational database, ontology and database integration, mapping, Web information portal. / Information Science / M. Sc. (Information Systems)
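
As context for the kind of integration surveyed in the study, one widely used approach (not the decision-making tool developed in the dissertation) is to expose relational rows as RDF triples under a domain ontology. The sketch below is a hedged illustration using Python's sqlite3 and rdflib, with a hypothetical employee table and a hypothetical ontology namespace:

```python
import sqlite3
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/ontology#")         # hypothetical domain ontology

def rows_to_triples(db_path):
    """Map rows of a hypothetical relational 'employee' table to RDF triples."""
    g = Graph()
    g.bind("ex", EX)
    con = sqlite3.connect(db_path)
    for emp_id, name, dept in con.execute("SELECT id, name, department FROM employee"):
        subject = URIRef(f"http://example.org/employee/{emp_id}")
        g.add((subject, RDF.type, EX.Employee))         # class membership from the ontology
        g.add((subject, EX.hasName, Literal(name)))     # datatype properties from table columns
        g.add((subject, EX.worksIn, Literal(dept)))
    return g
```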
663

Understanding Usability-related Information Security Failures in a Healthcare Context

Boyer, Edward D 24 September 2014 (has links)
This research study explores how the nature and type of usability failures impact task performance in a healthcare organization. Healthcare organizations are composed of heterogeneous and disparate information systems intertwined with complex business processes, which create many challenges for the users of the system. The manner in which information technology systems and products are implemented, along with the overlapping, intricate tasks users must perform, poses problems in the area of usability. Usability research primarily focuses on the user interface; therefore, designing a better interface often leaves security in question. When usability failures arise from the incongruence between healthcare tasks and the technology used in healthcare organizations, the security of information is jeopardized. Hence, the research problem is to understand the nature and types of usability-related security failures and how they can be reduced in a healthcare information system. This research used a positivist single case study design with embedded units to understand the nature and type of usability-related information system security failures in a healthcare context. The nature and types of usability failures were identified through a four-step data analysis process that drew on (1) terms defining user failures in a large healthcare organization, (2) Task-Technology Fit theory, (3) the Confidentiality, Integrity and Availability triad of information protection, used to capture usability-related information system security failures, and (4) semi-structured interviews with users of the healthcare information system, capturing and recording their interactions with the usability failures. The reported usability-related information system security failures dated back five years within a healthcare organization consisting of a network of 128 medical centers. The evaluation of five years of data and over 8,000 problems reported by healthcare workers allowed this research to identify the misalignment between healthcare tasks and the technology used, and how that misalignment impacted both information security and user performance. The nature of the usability failures centered on technical controls; however, the cause of the failures was predominantly information integrity failures and the unavailability of applications and systems. Usability-related information system security failures largely go unrecognized because of the nature of healthcare tasks and because healthcare workers mitigate such failures by employing workarounds to complete a task. Applying non-technical security controls within the development process provides the clearest path to addressing the captured usability-related information system security failures throughout the organization.
664

A user's guide for financial statements of African companies

Duncan, Ashley John 12 1900 (has links)
Thesis (MBA)--Stellenbosch University, 2001. / ENGLISH ABSTRACT: The Africa Centre for Investment Analysis (AClA) at the University of Stellenbosch is undertaking the creation and maintaining of a capital markets database on historical financial and market data of all companies listed on various African stock exchanges (excluding South Africa). This study report aims at establishing a user's guide for the Centre's financial statement database in anticipation that the database will become a comprehensive source of vital market and financial information for investors in Africa. The guide describes the common format that was created so that African companies can be easily compared. The guide clarifies the standardised coding system that was created so that future users are able to access relevant data, and attempts to facilitate the ease of maintaining and developing the database. Some of the coding described in the guide is not original, since an adequate coding system is already available and in use at the Centre, but forms part of this study project since no formal or documented guide to its operation and implementation is available. The guide describes the classification and coding system used for the various countries, industries and companies on the Centre's database. The industry classification system that was developed is not as comprehensive as the global industry classification standard that it is based on, but is adequate to fairly describe the core activities of African companies. The guide introduces the standardised financial statement templates that are (to be) used on the Centre's database, and also describes the individual line items on these. The templates created are such that the annual financial statements of African companies, in conformity with international generally accepted accounting practice, fairly present the state of affairs of the African companies and their businesses. Templates for balance sheets, income statements, and cash flow statements for the companies have been created. The value of the information on the database is based on the soundness of the data reported in the African company's annual reports, and the interpretation of these when being captured onto the database. The definitions listed in this study report serve merely as a guideline to compensate for the differing accounting rules and practices that exist between countries. Not all listed African companies are recorded on the database. This is because the Centre relies on the contribution of data (like annual financial reports) from African stock exchanges, stockbrokers and the individual companies themselves. The importance of encouraging all African stakeholders to contribute as much information as possible, in order to ensure that comparable data is collected, is vital to the successful development and use of the database. / AFRIKAANSE OPSOMMING: Die Afrikasentrum vir Beleggingsontleding aan die Universiteit van Stellenbosch is besig met die daarstelling en instandhouding van 'n databasis van kapitaalmarkte van historiese finansiële- en markinligting van alle maatskappye op verskeie Afrika effektebeurse (Suid-Afrika uitgesluit). Hierdie ondersoekverslag beoog om 'n gebruikersgids saam te stel vir die sentrum se finansiële databasis met die verwagting dat die databasis 'n omvattende bron van mark- en finansiële inligting vir beleggers in Afrika sal word. 
Die gids verklaar voorts ook die gestandaardiseerde kodestelsel wat ontwikkel was om toekomstige gebruikers toegang tot relevante data te gee. Die gids poog ook om die instandhouding en verdere ontwikkeling van die databasis te vergemaklik. Sommige van die kodefisering wat in die gids beskryf word, is nie oorspronklik nie aangesien 'n voldoende kodestelsel reeds beskikbaar en in gebruik is in die sentrum. Dit vorm egter deel van hierdie studieprojek aangesien geen formele of gedokumenteerde gids vir die databasis se gebruik en implementering beskikbaar is nie. Die gids beskryf die klassifikasie en kodestelsel vir die verskeie lande, industrieë en maatskappye wat op die sentrum se databasis gebruik word. Die klassifikasiestelsel vir industrieë wat ontwikkel is, is nie so omvattend soos die globale industrieklassifikasiestandaard waarop dit gebaseer is nie, maar dit is genoegsaam om 'n redelike beskrywing van die kernaktiwiteite van Afrika se maatskappye te gee. Die gids stel die gestandaardiseerde finansiële patroon wat op die sentrum se databasis gebruik word (en gebruik sal word) bekend en dit beskryf ook die individuele lynitems daarop. Die patrone wat sodanig geskep word gee 'n redelike beeld van die jaarlikse finansiële state van Afrika se maatskappye in ooreenstemming met internasionale algemene aanvaarde boekhoupraktyke. Patrone vir balansstate, inkomstestate en kontantvloeistate vir die maatskappye is geskep. Die waarde van die inligting op die databasis is gebaseer op die egtheid van die data beskikbaar in die Afrikamaatskappye se jaarverslae en die interpretasie daarvan wanneer dit op die datastelsel vasgelê word. Die definisies wat in die studieverslag voorkom, dien slegs as 'n handleiding om te vergoed vir die verskille in boekhoureëls- en gebruike wat in verskillende lande bestaan. Alle Afrikalande wat op die effektebeurs is, is nie ingesluit op die databasis nie aangesien die sentrum op die verskaffing van inligting op Afrika se effektebeurse, makelaars en individuele maatskappye aangewese is. Die belangrikheid om alle Afrika rolspelers aan te moedig om soveel inligting as moontlik by te dra, is deurslaggewend tot die suksesvolle ontwikkeling en gebruik van die databasis.
665

Η χρήση των Γεωγραφικών Συστημάτων Πληροφοριών στην κατασκευή βάσης υδρογεωλογικών δεδομένων [The use of Geographic Information Systems in building a hydrogeological database]

Κουζέλη, Ευλαμπία 09 January 2014 (has links)
Η εργασία αυτή εκπονήθηκε στα πλαίσια του Μεταπτυχιακού Προγράμματος Σπουδών «Εφαρμοσμένη και Περιβαλλοντική Γεωλογία και Γεωφυσική». Αντικείμενο της εργασίας είναι η εφαρμογή των Γεωγραφικών Συστημάτων Πληροφοριών στο σχεδιασμό μιας Γεωγραφικής Βάσης Υδρογεωλογικών δεδομένων με εφαρμογή στο νομό Αιτωλοακαρνανίας. Η βάση αυτή θα αποτελέσει ένα εργαλείο με το οποίο θα γίνεται η διαχείριση μεγάλου όγκου υδρογεωλογικών πληροφοριών (στάθμες, στοιχεία ποιότητας των νερών κ.α.) με τρόπο απλό και γρήγορο. Τα πρώτο στάδιο της εργασίας ήταν ο καθορισμός των στόχων που έπρεπε να επιτευχθούν μέσω αυτής για τη σωστή δημιουργία της γεωβάσης. Το επόμενο βήμα έρχεται να γίνει με τη συλλογή των δεδομένων που θα τοποθετηθούν στη βάση αυτή. Οι πληροφορίες αυτές προέρχονται από εγκεκριμένες τεχνικές μελέτες που δόθηκαν από δημόσιες υπηρεσίες, ιδιώτες ή ήταν ανηρτημένες σε επίσημες κυβερνητικές ιστοσελίδες. Τα στοιχεία αυτά ήταν τόσο χωρικά (διοικητικά όρια νομών, υδρολογικές λεκάνες κ.α.) όσο και μη - χωρικά (περιγραφικές πληροφορίες όπως πληθυσμιακά δεδομένα ονομασία ή κωδικοποίηση λεκανών, υδροχημικές μετρήσεις κ.α.). Κάποια στοιχεία αποκτήθηκαν από την επί τόπου έρευνα που έγινε στα πλαίσια αυτής της εργασίας σε ένα κομμάτι της υδρολογικής λεκάνης της λίμνης Τριχωνίδας. Στη συνέχεια δημιουργείται η βάση χωρικών δεδομένων. Για να επιτευχθεί αυτός ο στόχος έπρεπε τα συλλεγμένα δεδομένα να ταξινομηθούν σε διαφορετικούς φακέλους ανάλογα με τις κοινές ιδιότητες που έχουν καθώς και σε διαφορετικό είδος αρχείων, ανάλογα με την επιθυμητή χρήση τους. Ακολουθεί, σε ένα μέρος αυτών, η δημιουργία υπέρ – συνδέσεων οι οποίες έχουν ως στόχο την άμεση πρόσβαση σε πίνακες βροχομετρικών και χημικών δεδομένων. Έπειτα με τη βοήθεια της γλώσσας SQL έχουν τεθεί ερωτήματα (queries) που οδηγούν στην ανάκτηση δεδομένων. Τέλος, δεν πρέπει να παραλείψουμε και το κομμάτι της επεξεργασίας μέρους των δεδομένων για την παραγωγή χαρτών και διαγραμμάτων. Αυτό πραγματοποιήθηκε τη βοήθεια εργαλείων του ArcMap. Σημαντικό είναι και το γεγονός ότι για τη δημιουργία των χαρτών έγινε χρήση διαφόρων μεθόδων ώστε να έχουμε τη δυνατότητα σύγκρισης των αποτελεσμάτων για την ορθή διεξαγωγή συμπερασμάτων. Η κατασκευή της βάσης αυτής είχε ως αποτέλεσμα την αποθήκευση ενός μεγάλου όγκου δεδομένων που είναι ταξινομημένα σε διαφορετικά επίπεδα και την παραγωγή χαρτών που καλύπτουν ένα μεγάλο εύρος των προαναφερθέντων θεματικών επιπέδων. / This project was produced within the Post-graduate studies program “Applied and Environmental Geology and Geophysics”. Object of this work is the application of Geographic Information Systems in designing a geographic hydro-geological data base with application to the Aitoloakarnania district. This data base will be used as a tool with which we will be able to manage a vast amount of hydro-geological information (elevation, water quality data etc.) in a very simple and fast way. The first stage of the study was to define the objectives to be achieved through this, for the proper creation of the geo-database. The next step was the collection of the data that would be added in this data base. This information came from approved technical studies provided by public services, individuals or was posted on official governmental websites. These elements were spatial (administrative district boundaries, basins, etc.) and also aspatial (descriptive information such as demographic data, basins names or codes, hydro-chemical measurements, etc). 
Some other elements were obtained by in-situ investigation conducted for this project in part of the hydrological basin of Trichonida lake. Next, the spatial database was produced. To achieve this, the collected data had to be organised into different folders according to their common properties, and into different file types according to their intended use. A part of these files was then used to create hyperlinks whose purpose is to give direct access to tables of precipitation and chemical data. Using SQL, various queries were set up to retrieve data. Finally, part of the data was processed to produce maps and diagrams, using the ArcMap tools. Importantly, various methods were used in producing these maps so that the results could be compared and sound conclusions drawn. The outcome of building this database is the storage of a large volume of data, classified into different layers, and the production of maps covering a large part of the above-mentioned thematic layers.
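
To make the retrieval step concrete, the snippet below sketches the kind of attribute query such a geodatabase can answer. The table, column names and values are invented for illustration and do not come from the thesis data:

```python
import sqlite3

# Hypothetical schema: one row per borehole with location, water level and a nitrate reading.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE borehole (
    id INTEGER PRIMARY KEY, basin TEXT, lon REAL, lat REAL,
    water_level_m REAL, nitrate_mg_l REAL)""")
con.executemany("INSERT INTO borehole VALUES (?, ?, ?, ?, ?, ?)",
                [(1, "Trichonida", 21.55, 38.57, 12.4, 18.0),
                 (2, "Trichonida", 21.60, 38.55,  9.8, 62.5),
                 (3, "Acheloos",   21.30, 38.70, 15.1,  7.3)])

# Attribute query: boreholes in the Trichonida basin whose nitrate reading exceeds 50 mg/l.
for row in con.execute("SELECT id, lon, lat, nitrate_mg_l FROM borehole "
                       "WHERE basin = ? AND nitrate_mg_l > ?", ("Trichonida", 50.0)):
    print(row)
```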
666

3D facial feature extraction and recognition : an investigation of 3D face recognition : correction and normalisation of the facial data, extraction of facial features and classification using machine learning techniques

Al-Qatawneh, Sokyna M. S. January 2010 (has links)
Face recognition research using automatic or semi-automatic techniques has emerged over the last two decades. One reason for growing interest in this topic is the wide range of possible applications for face recognition systems. Another reason is the emergence of affordable hardware supporting digital photography and video, which has made the acquisition of high-quality, high-resolution 2D images far more commonplace. However, 2D recognition systems are sensitive to subject pose and illumination variations, whereas 3D face recognition, which is not directly affected by such environmental changes, could be used alone or in combination with 2D recognition. Recently, with the development of more affordable 3D acquisition systems and the availability of 3D face databases, 3D face recognition has been attracting interest as a way to tackle the performance limitations of most existing 2D systems. In this research, we introduce a robust automated 3D face recognition system that works on 3D data of faces with different facial expressions, hair, shoulders, clothing, etc., extracts features for discrimination, and uses machine learning techniques to make the final decision. A novel system for the automatic processing of 3D facial data has been implemented using a multi-stage architecture: in a pre-processing and registration stage the data were standardised, spikes were removed, holes were filled and the face area was extracted. Then the nose region, which is anatomically more rigid than other facial regions, was automatically located and analysed by computing the precise location of the symmetry plane. Useful facial features and a set of effective 3D curves were then extracted. Finally, the recognition and matching stage was implemented using cascade correlation neural networks and support vector machines for classification, and nearest neighbour algorithms for matching. It is worth noting that the FRGC data set is the most challenging data set available supporting research on 3D face recognition, and machine learning techniques are widely recognised as appropriate and efficient classification methods.
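
For a sense of the final classification and matching stage, the sketch below pairs an SVM classifier with a 1-nearest-neighbour matcher on placeholder feature vectors. Cascade-correlation networks have no off-the-shelf scikit-learn equivalent, so they are omitted, and the data here are random stand-ins rather than FRGC features:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Placeholder features: one row per scan, columns standing in for extracted 3D
# facial descriptors (e.g. profile-curve measurements); labels identify subjects.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 10, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)                      # classification stage
knn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)    # nearest-neighbour matching

print("SVM accuracy:", svm.score(X_te, y_te))
print("1-NN accuracy:", knn.score(X_te, y_te))
```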
667

Database metadata requirements for automated web development : a case study using PHP

Mgheder, Mohamed Ahmed January 2009 (has links)
The Web has come a long way. It started as a distributed document repository and quickly became the springboard for a new type of application. Propped on top of the original HTML+HTTP architecture, this new application platform shifted the way the architecture was used, so that commands and functionality were embedded in the form data of Web requests rather than in the HTTP command conveying the request. This approach enabled Web requests to convey any type of data, not just document operations. This growth has occurred because the Web provides such a powerful platform on which to create applications, even though web development methods are still evolving toward the structure and stability required to take on this enormous new role. As the needs of developers change, the themes that arise most frequently become embedded into new environments to support those needs. Until recently, Web application programming has largely been done with a set of keywords and metaphors developed long before the Web became a popular place to program. APIs have been developed to support Web-specific features, but they are no replacement for fundamental changes in the programming environment itself. The growth of Web applications requires a new type of programming designed specifically for the needs of the Web. This thesis aims to contribute towards the development of an abstract framework for generating abstract, dynamic Web user interfaces that are not tied to a specific platform. To meet this aim, the thesis presents a general implementation of a prototype system that uses the information in database metadata in conjunction with PHP. Database metadata is rich in the information needed to build dynamic user interfaces. The thesis uses PHP and the database abstraction library ADOdb to provide a generalised, database-metadata-based prototype. PHP places no restrictions on accessing and extracting database metadata from numerous database management systems. As a result, PHP and relational databases were used to build the proposed framework, with ADOdb linking the two technologies. The framework implemented in this thesis demonstrates that it is possible to generate a variety of Web entry forms automatically that are not specific to any platform.
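
The thesis prototype is built on PHP and the ADOdb abstraction library; as a language-neutral sketch of the same principle — reading column metadata and emitting an entry form — the hypothetical Python function below queries SQLite's table_info pragma and maps column types to HTML input fields:

```python
import sqlite3

def generate_form(db_path, table):
    """Emit a minimal HTML entry form derived from a table's column metadata (illustrative only)."""
    con = sqlite3.connect(db_path)
    columns = con.execute(f"PRAGMA table_info({table})").fetchall()
    fields = []
    for _cid, name, col_type, _notnull, _default, pk in columns:
        if pk:                                            # skip auto-generated primary keys
            continue
        input_type = "number" if col_type.upper() in ("INTEGER", "REAL") else "text"
        fields.append(f'  <label>{name}: <input type="{input_type}" name="{name}"></label><br>')
    return (f'<form method="post" action="/insert/{table}">\n'
            + "\n".join(fields) + "\n</form>")
```

Because the form is derived from the catalog rather than hand-written, adding a column to the table changes the generated form without touching the application code — the property the thesis argues for.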
668

Integrating programming languages and databases via program analysis and language design

Wiedermann, Benjamin Alan 23 August 2010 (has links)
Researchers and practitioners alike have long sought to integrate programming languages and databases. Today's integration solutions focus on the data-types of the two domains, but today's programs lack transparency. A transparently persistent program operates over all objects in a uniform manner, regardless of whether those objects reside in memory or in a database. Transparency increases modularity and lowers the barrier of adoption in industry. Unfortunately, fully transparent programs perform so poorly that no one writes them. The goal of this dissertation is to increase the performance of these programs to make transparent persistence a viable programming paradigm. This dissertation contributes two novel techniques that integrate programming languages and databases. Our first contribution--called query extraction--is based purely on program analysis. Query extraction analyzes a transparent, object-oriented program that retrieves and filters collections of objects. Some of these objects may be persistent, in which case the program contains implicit queries of persistent data. Our interprocedural program analysis extracts these queries from the program, translates them to explicit queries, and transforms the transparent program into an equivalent one that contains the explicit queries. Query extraction enables programmers to write programs in a familiar, modular style and to rely on the compiler to transform their program into one that performs well. Our second contribution--called RBI-DB+--is an extension of a new programming language construct called a batch block. A batch block provides a syntactic barrier around transparent code. It also provides a latency guarantee: If the batch block compiles, then the code that appears in it requires only one client-server communication trip. Researchers previously have proposed batch blocks for databases. However, batch blocks cannot be modularized or composed, and database batch blocks do not permit programmers to modify persistent data. We extend database batch blocks to address these concerns and formalize the results. Today's technologies integrate the data-types of programming languages and databases, but they discourage programmers from using procedural abstraction. Our contributions restore procedural abstraction's use in enterprise applications, without sacrificing performance. We argue that industry should combine our contributions with data-type integration. The result would be a robust, practical integration of programming languages and databases. / text
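
A toy before/after pair (hypothetical schema, not the dissertation's analysis) conveys what the query-extraction transformation buys: the transparent version filters objects wherever they happen to live, while the extracted version pushes the same predicate to the database so only matching rows cross the client-server boundary.

```python
import sqlite3

# Transparent style: the loop is oblivious to whether the objects are in memory or persistent.
def high_earners_transparent(employees, threshold):
    return [e for e in employees if e["salary"] > threshold]

# After query extraction: the implicit filter becomes an explicit, pushed-down query.
def high_earners_extracted(con, threshold):
    cur = con.execute("SELECT name, salary FROM employee WHERE salary > ?", (threshold,))
    return [{"name": n, "salary": s} for n, s in cur]
```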
669

Structure determinations of natural products and related molecules.

Camou-Arriola, Fernando Alberto Josue. January 1989 (has links)
Structures were determined for 48 new natural products and several related compounds by NMR methods. One new natural product and two unnatural product structures were determined by X-ray diffraction. Molecular mechanics calculations on two indoles related to the neurotransmitter serotonin and on some synthetic cyclophanes were used to gain information about their preferred conformations. Considerable time is wasted redetermining the structures of known natural products when they are encountered in new sources. To help alleviate this problem, a database which searches on proton NMR chemical shifts was developed.
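
The shift-search database itself is not detailed in the abstract; the sketch below, with invented entries, illustrates the general idea of ranking stored compounds by how many of a query's proton chemical shifts they match within a tolerance:

```python
def match_shifts(query, database, tolerance=0.05):
    """Rank stored compounds by matched 1H shifts (ppm) within a tolerance (illustrative only)."""
    scores = []
    for name, shifts in database.items():
        remaining, hits = list(shifts), 0
        for q in query:
            best = min(remaining, key=lambda s: abs(s - q), default=None)
            if best is not None and abs(best - q) <= tolerance:
                hits += 1
                remaining.remove(best)    # each stored shift may match only once
        scores.append((hits, name))
    return sorted(scores, reverse=True)

# Hypothetical entries: compound name -> observed 1H chemical shifts in ppm.
db = {"indole derivative A": [7.25, 7.05, 6.90, 3.10, 2.85],
      "cyclophane B": [7.40, 7.12, 4.05, 2.95]}
print(match_shifts([7.24, 6.91, 3.08], db))
```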
670

Database Forensics in the Service of Information Accountability

Pavlou, Kyriacos Eleftheriou January 2012 (has links)
Regulations and societal expectations have recently emphasized the need to mediate access to valuable databases, even by insiders. At one end of a spectrum is the approach of restricting access to information; at the other is information accountability. The focus of this work is on effecting information accountability of data stored in relational databases. One way to ensure appropriate use and thus end-to-end accountability of such information is through continuous assurance technology, via tamper detection in databases built upon cryptographic hashing. We show how to achieve information accountability by developing and refining the necessary approaches and ideas to support accountability in high-performance databases. These concepts include the design of a reference architecture for information accountability and several of its variants, the development of a sequence of successively more sophisticated forensic analysis algorithms and their forensic cost model, and a systematic formulation of forensic analysis for determining when the tampering occurred and what data were tampered with. We derive a lower bound for the forensic cost and prove that some of the algorithms are optimal under certain circumstances. We introduce a comprehensive taxonomy of the types of possible corruption events, along with an associated forensic analysis protocol that consolidates all extant forensic algorithms and the corresponding type(s) of corruption events they detect. Finally, we show how our information accountability solution can be used for databases residing in the cloud. In order to evaluate our ideas we design and implement an integrated tamper detection and forensic analysis system named DRAGOON. This work shows that information accountability is a viable alternative to information restriction for ensuring the correct storage, use, and maintenance of high-performance relational databases.
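
DRAGOON's notarization scheme and forensic cost model are considerably richer than this, but the underlying idea of hash-based tamper detection can be sketched in a few lines: chain cryptographic hashes over the audited records, notarize the chain, and later recompute it to localize the earliest mismatch.

```python
import hashlib

def chain_hashes(records, seed=b"genesis"):
    """Cumulative SHA-256 hash chain over an append-only sequence of records."""
    digest, chain = seed, []
    for rec in records:
        digest = hashlib.sha256(digest + rec.encode("utf-8")).digest()
        chain.append(digest)
    return chain

def first_tampered_index(records, trusted_chain, seed=b"genesis"):
    """Recompute the chain and report the earliest record whose hash no longer matches."""
    for i, digest in enumerate(chain_hashes(records, seed)):
        if digest != trusted_chain[i]:
            return i
    return None

# Invented audit records for illustration.
rows = ["alice|2012-01-03|100.00", "bob|2012-01-04|250.00", "carol|2012-01-05|75.00"]
trusted = chain_hashes(rows)             # hashes notarized, e.g. stored with a third party
rows[1] = "bob|2012-01-04|999.00"        # simulated tampering by an insider
print("first mismatch at record index:", first_tampered_index(rows, trusted))
```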
