  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
441

Views in Z

Luke Wildman, Unknown Date
A specification of a software program, hardware component, or system is a description of what the system is required to do, without describing how it is to be done. Specifications provide the necessary details for system developers, suppliers, users and regulators to understand and agree upon the requirements of a system. It is therefore vital that specifications are clear, concise, complete, and free of ambiguity and inconsistency. Specifications are usually expressed using a combination of informal natural-language descriptions, diagrams, and formal mathematical techniques. The degree to which formal mathematics is used depends on the nature of the application and the criticality of the function being described. In industries where the cost of a system or software failure is high, such as national defence, government, banking, transport, energy, communication, and some manufacturing industries, formal specification is recommended because it offers greater clarity and consistency; moreover, formal specifications are machine-readable, allowing some automated checking to be applied. However, poorly written formal specifications can be less useful than informal specifications if they are unreadable (not clear), or if they are overly large or complex (not concise), making it hard to determine whether they are consistent or complete. In particular, if the system itself is large or complex, or features multiple and diverse aspects of behaviour, it can be difficult to capture all aspects of its behaviour clearly and concisely in a monolithic formal model, or within a single formal notation. In many cases this is because the modelling approach may be particularly suited to some aspects of the system but not to others. The widely accepted solution to this problem is to use diverse modelling techniques to specify the different aspects of the system from different viewpoints.
This results in a number of view specifications that, taken together, make up the complete specification of the system. The thesis introduces structuring mechanisms for the formal specification language Z that allow the view specifications of a system to be described, combined and reused. Specification encapsulation and parameter abstraction and application are explored, along with object-oriented concepts such as sub-typing and sub-classing. Two case studies, based on a language-based editor and a database system, are provided to illustrate how the techniques developed in this thesis may be used.
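To give a flavour of view combination in Z (schema and view names invented here for illustration, not taken from the thesis): two partial views of a language-based editor can each be written as a Z schema and then combined into one specification by schema conjunction, which merges their shared components:

```latex
\begin{zed}
  [CHAR, TREE]
\end{zed}

\begin{schema}{TextView}
  text : \seq CHAR \\
  cursor : \nat
\where
  cursor \leq \# text
\end{schema}

\begin{schema}{SyntaxView}
  text : \seq CHAR \\
  tree : TREE
\end{schema}

\begin{zed}
  Editor \defs TextView \land SyntaxView
\end{zed}
```

The conjunction identifies the `text` component shared by both views, so the textual and syntactic views are kept consistent in the combined schema.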
442

A method extension approach based on generic components / Approche d'extension de méthodes fondée sur l'utilisation de composants génériques

Deneckere, Rebecca 04 January 2001
This thesis belongs to the field of method engineering, and more specifically to situational method engineering. It proposes a method construction technique, called Extension, for adapting an existing analysis method to the specific needs of the project at hand. The proposed approach combines three elements. First, a knowledge representation technique based on patterns, described by several description and manipulation languages. Second, a technique for organising patterns using extension maps, which guide the reuse of patterns. These maps are executable process models that extend an object-oriented method to a specific application domain; they maintain the consistency of the method by taking into account the precedence constraints attached to each extension pattern. Third, a design technique based on generic meta-patterns that eases the method engineer's work when designing an extension map. These meta-patterns were generated from an inventory of all the strategies that can appear on an extension map, and all the knowledge relating to each strategy is encapsulated in a specific meta-pattern. The approach thus offers the method engineer guided support both when executing extension patterns (map) and when designing them (meta-pattern).
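A minimal sketch of the precedence idea described above (pattern names and data are invented, not taken from the thesis): each extension pattern lists the patterns that must already have been applied, and the map refuses to apply a pattern out of order.

```python
# Hypothetical extension map: each extension pattern names the patterns
# that must be applied before it (its precedence constraints).
PRECEDENCE = {
    "add_temporal_attributes": [],
    "add_history_classes": ["add_temporal_attributes"],
    "add_versioning": ["add_history_classes"],
}

def apply_pattern(applied, pattern):
    """Apply `pattern` only if all of its predecessors were already applied."""
    missing = [p for p in PRECEDENCE[pattern] if p not in applied]
    if missing:
        raise ValueError(f"cannot apply {pattern}: missing {missing}")
    return applied | {pattern}

applied = set()
for p in ["add_temporal_attributes", "add_history_classes", "add_versioning"]:
    applied = apply_pattern(applied, p)
print(sorted(applied))
# ['add_history_classes', 'add_temporal_attributes', 'add_versioning']
```

Executing the map in any other order raises an error, which is one simple way an executable process model can keep the extended method consistent.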
443

A generic framework for query optimisation in heterogeneous and distributed environments / Proposition d'un cadre générique d'optimisation de requêtes dans les environnements hétérogènes et répartis

Liu, Tianxiao 06 June 2011
In this thesis, we propose a generic framework for query optimisation in heterogeneous, distributed environments. We propose a generic source description model (GSD) that can describe every kind of information involved in query processing and optimisation. In particular, this model captures the cost information needed to estimate the cost of alternative execution plans. Our generic optimisation framework provides the elementary functions needed to implement optimisation procedures under different search strategies. Our experimental results demonstrate the accuracy of cost estimation with the GSD model and the flexibility of the framework when the search strategy is changed. The framework has been implemented and integrated into a data integration product (DVS) marketed by Xcalia - Progress Software Corporation. For queries containing many inter-site joins over large sources, the optimal plan is computed in around 2 seconds and its execution time is 28 times shorter than that of the initial, unoptimised plan.
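The idea of a pluggable search strategy over one cost model can be sketched as follows (the cardinalities, the toy cost model, and both strategies are invented for illustration; they are not the GSD model or the thesis's algorithms):

```python
from itertools import permutations

# Hypothetical cardinalities of three distributed relations.
CARD = {"A": 1000, "B": 50, "C": 10}

def plan_cost(order):
    """Toy cost model for a left-deep join order: sum of intermediate
    join sizes, with a crude fixed selectivity of 1/1000."""
    size, cost = CARD[order[0]], 0
    for rel in order[1:]:
        cost += size * CARD[rel]
        size = max(1, size * CARD[rel] // 1000)
    return cost

def exhaustive(relations):
    """Search strategy 1: enumerate every join order."""
    return min(permutations(relations), key=plan_cost)

def greedy(relations):
    """Search strategy 2: start from the smallest relation and always
    add the relation that keeps the running plan cheapest."""
    order = [min(relations, key=lambda r: CARD[r])]
    rest = set(relations) - set(order)
    while rest:
        nxt = min(rest, key=lambda r: plan_cost(order + [r]))
        order.append(nxt)
        rest.remove(nxt)
    return tuple(order)

best = exhaustive(["A", "B", "C"])
print(best, plan_cost(best))
```

Both strategies call the same `plan_cost` function, which is the role a source description model plays: the optimiser can swap search strategies without touching the cost information.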
444

Optimising XML updates for main-memory systems: implementation and experiments / Optimisation des mises à jour XML pour les systèmes main-memory : implémentation et expériences

Sahakyan, Marina 17 November 2011
This thesis proposes techniques for optimising XML updates through data projection.
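The general idea behind projection-based optimisation (sketched here with an invented path set and a deliberately naive pruning function, not the thesis's algorithm) is to load only the parts of the document that an update can touch, so that a main-memory engine handles a smaller tree:

```python
import xml.etree.ElementTree as ET

# Hypothetical projection: an update only touches /catalog/book/price,
# so we keep only elements lying on the needed paths.
NEEDED = {("catalog",), ("catalog", "book"), ("catalog", "book", "price")}

def project(elem, path=()):
    """Return a copy of `elem` keeping only elements on a needed path."""
    here = path + (elem.tag,)
    if here not in NEEDED:
        return None
    copy = ET.Element(elem.tag, elem.attrib)
    copy.text = elem.text
    for child in elem:
        kept = project(child, here)
        if kept is not None:
            copy.append(kept)
    return copy

doc = ET.fromstring(
    "<catalog><book><title>Z</title><price>10</price></book></catalog>")
print(ET.tostring(project(doc), encoding="unicode"))
# <catalog><book><price>10</price></book></catalog>
```

The `<title>` subtree is pruned away before the update runs; a real system must of course also merge the updated projection back into the full document.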
445

An internet-based booking system for hotel rooms / Ett internetbaserat bokningssystem för hotellrum

Pagil, Paul, Pettersson, Fredrik January 2006
Berghs Företagshotell in Ockelbo rents out, among other things, overnight rooms and apartments, much like a hostel. The problem is that they keep track of their bookings with a simple Excel sheet and the odd post-it note, and they need a better system. The owner has long considered building one himself, since he has solid database and programming skills, but like every other small-business owner he unfortunately has too little time. For that reason we thought this would make a perfect degree project for us. We are simply building a web-based booking system, with an application to attach to the hotel's existing website. We are also building an administration interface where all management of bookings and rooms takes place. As database manager we use MS SQL Server 2000 with stored procedures, and as development environment we chose MS Visual Studio .NET 2003, which resulted in a number of ASP.NET web forms. Unlike traditional hotels (where most rooms look alike), rooms and apartments can here be arranged entirely according to the customer's needs. This is because the hotel is housed in a former apartment building with 14 flats, where each flat can easily be divided into a number of rooms. We carried this flexible arrangement over to the computer by letting the user "build" and "rebuild" the flats into bookable rooms directly in the application. There is also a function that "hides" a room in the database, which is useful because the owner can also let the rooms out as offices; hidden rooms are then not bookable in the hotel business.
446

A browser-based tool for designing query interfaces to scientific databases

Newsome, Mark Ronald, 1960- 15 November 1996
Scientists in the biological sciences need to retrieve information from a variety of data collections, traditionally maintained in SQL databases, in order to conduct research. Because current assistant tools are designed primarily for business and financial users, scientists have been forced to use the notoriously difficult command-line SQL interface supplied as standard by most database vendors. The goal of our research has been to establish the requirements of scientific researchers and to develop specialized query assistance tools to help them query data collections across the Internet. This thesis describes our work in developing HyperSQL, a Web-to-database scripting language, and most importantly, Query Designer, a user-oriented tool for designing query interfaces directly on Web browsers. Current browsers (e.g., Netscape, Internet Explorer) do not easily interoperate with databases without extensive CGI (Common Gateway Interface) programming. HyperSQL is a scripting language that enables database administrators to construct forms-based query interfaces intended for end-users who are not proficient in SQL. Query results are formatted as clickable hypertext links which can be used to browse the database for related information, bring up Web pages, or access remote search engines. HyperSQL query interfaces are independent of the database computer, making it possible to construct different interfaces targeting distinct groups of users. Capitalizing on our experience with HyperSQL, we developed Query Designer, a user-oriented tool for building query interfaces directly on Web browsers. No experience in SQL or HTML programming is necessary. After choosing a target database, the user can build a personalized query interface by making menu selections and filling out forms; the tool automatically establishes network connections and composes the HTML and SQL code.
The automatically generated query form can be used immediately to issue a query, customized further, or saved for later use. Results returned from the database are dynamically formatted into hypertext for navigating related information in the database. / Graduation date: 1997
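The core step such a tool automates, composing SQL from form selections, can be sketched as follows (the function, table, and column names are invented for illustration; this is not HyperSQL or Query Designer code):

```python
# Hypothetical sketch: turn menu selections from a web form into a
# parameterised SQL query, the way a query-designer tool might.
def build_query(table, columns, filters):
    """Compose a SELECT statement from form selections.

    `filters` maps column -> value; values are returned as parameters,
    never interpolated into the SQL text, to avoid injection.
    """
    where = " AND ".join(f"{col} = ?" for col in filters)
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if where:
        sql += f" WHERE {where}"
    return sql, list(filters.values())

sql, params = build_query(
    "sequences", ["accession", "organism"], {"organism": "E. coli"})
print(sql)     # SELECT accession, organism FROM sequences WHERE organism = ?
print(params)  # ['E. coli']
```

The generated statement and parameter list can then be handed to any database driver, which mirrors the tool's separation between interface design and the underlying database.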
447

Logging and Recovery in a Highly Concurrent Database

Keen, John S. 01 June 1994
This report addresses the problem of fault tolerance to system failures for database systems that are to run on highly concurrent computers. It assumes that, in general, an application may have a wide distribution in the lifetimes of its transactions. Logging remains the method of choice for ensuring fault tolerance. Generational garbage-collection techniques manage the limited disk space reserved for log information; this technique does not require periodic checkpoints and is well suited to applications with a broad range of transaction lifetimes. An arbitrarily large collection of parallel log streams provides the necessary disk bandwidth.
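Two of the ideas above, spreading log records across parallel streams and reclaiming log space by keeping only records of still-live transactions, can be sketched in a toy form (the stream assignment, record format, and reclamation rule are invented here, not the report's design):

```python
from collections import defaultdict

N_STREAMS = 4
streams = defaultdict(list)          # stream id -> list of (txn, record)

def log(txn_id, record):
    """Append a record to one of the parallel log streams."""
    streams[txn_id % N_STREAMS].append((txn_id, record))

def reclaim(stream_id, live_txns):
    """Generational-style reclamation: drop records of finished
    transactions; records of still-live (long) transactions survive
    into the older generation instead of forcing a checkpoint."""
    streams[stream_id] = [(t, r) for (t, r) in streams[stream_id]
                          if t in live_txns]

for t in range(8):
    log(t, f"update-{t}")
reclaim(0, live_txns={4})            # txns 0 and 4 were logged on stream 0
print(streams[0])                    # [(4, 'update-4')]
```

A long-lived transaction's records simply keep surviving reclamation passes, which is why this style of space management tolerates a wide distribution of transaction lifetimes.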
448

Types for Detecting XML Query-Update Independence

Ulliana, Federico 12 December 2012
Over the last decade, XML has become one of the main formats for representing and exchanging data on the Web. Detecting query-update independence, which holds when an update has no impact on a query, is a crucial problem for the efficient handling of tasks such as view maintenance, concurrency control, and security. This thesis presents a new static analysis technique for detecting independence between an XML query and an update when the data are typed by a schema. The contribution of the thesis rests on a richer notion of type than those used in the literature so far. Instead of characterising the elements of an XML document used or affected by a query or update with a set of labels, they are characterised by a set of label chains, corresponding to the paths traversed while evaluating the expression on a document valid for the schema. The independence analysis follows from the development of a type-inference system for these chains. This precise analysis raises an important and difficult question for recursive schemas: since an infinite set of chains can be inferred in that case, can the analysis be made effective, that is finite, and how? The thesis therefore presents a sound and complete approximation technique that ensures a finite analysis. This work led to algorithms for an efficient implementation of the analysis and to a large series of tests validating both the quality of the approach and its efficiency.
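A drastically simplified sketch of the chain idea (linear paths only, no schema, no type inference; the function and paths are invented): a query and an update are each summarised by the label chain they traverse, and the update cannot affect the query when neither chain is a prefix of the other.

```python
def independent(query_path, update_path):
    """Crude stand-in for the chain-based test: the update cannot
    affect the query if neither full label chain is a prefix of
    (or equal to) the other."""
    q, u = tuple(query_path), tuple(update_path)
    return q[: len(u)] != u and u[: len(q)] != q

# title vs price: disjoint leaves, independent
print(independent(["catalog", "book", "title"],
                  ["catalog", "book", "price"]))   # True
# the update writes below what the query reads: not independent
print(independent(["catalog", "book"],
                  ["catalog", "book", "price"]))   # False
```

The hard part the thesis addresses is absent here: with recursive schemas the set of chains is infinite, so a sound, finite approximation of these sets is needed before any such comparison can be made effective.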
449

Rights to Software and Databases : From a Swedish Consulting Perspective / Rätt till Mjukvara och Databaser : Ur ett svenskt konsultingperspektiv

Nilsson, Ola January 2009
In recent times companies have been forced to become more and more digitalized in order to spread company information and facilitate communication with clients, consumers and their own employees. The knowledge to integrate software and launch the company into the digital world cannot always be found within the company itself. Therefore, companies often resort to employing consulting companies to enable this for them. Because of copyright, the software created does not solely belong to the employing company – the intellectual property rights automatically stay with the consulting company that made it. When the consulting company omits details concerning intellectual property rights in the employment contract, the standard rules in the Swedish Copyright Act and the international directives kick in and give the consulting company the full rights to the programmes that it has created – with a few exceptions. The employing company may only alter the software in order to ensure that it is fully compatible with the already existing programmes it utilises and the operating system it uses. Even reverse engineering is permitted as long as the information gathered is only used for ensuring that compatibility. Information in databases is protected as it is creatively arranged in a systematic or methodical way by the one that has made a substantial investment in obtaining, verifying or presenting the information. The substantial investment depends on who has taken the risk of investing in the particular database. As databases are rarely made by consulting companies on behalf of a client, and the rules are sufficiently clear as to whom the database belongs, there are few questions concerning databases. Because of this, the assumption would be that the current legislation is working properly.
One of the more troubling issues in regard to copyright is that even where reverse engineering is illegal, proving infringement comes down to evidence and to which parts are quantitatively or qualitatively significant in the original programme. Currently, there is no registry of copyrighted works in Sweden, and so there is no telling who made the programme first if the work happens to spread. The creators of software have expressed concern and allegedly lobbied for a new directive giving more protection to the original creators. The culmination of the lobby work was the Software Patent Directive, which proposed that software should be seen as an invention and therefore eligible for patenting. However, there were many reasons why software should not be patented, most notably increased cost and the years of waiting for the patent grant, and the directive was rejected. Still, the concerns persisted and no greater protection has been given to the creators of software.
450

Ensemble of Feature Selection Techniques for High Dimensional Data

Vege, Sri Harsha 01 May 2012
Data mining involves the use of data analysis tools to discover previously unknown, valid patterns and relationships from large amounts of data stored in databases, data warehouses, or other information repositories. Feature selection is an important preprocessing step of data mining that helps increase the predictive performance of a model. The main aim of feature selection is to choose a subset of features with high predictive information and to eliminate irrelevant features with little or no predictive information. Using a single feature selection technique may yield local optima. In this thesis we propose an ensemble approach to feature selection, where multiple feature selection techniques are combined to give more robust and stable results. The ensemble of multiple feature ranking techniques is performed in two steps. The first step creates a set of different feature selectors, each providing its sorted order of features, while the second step aggregates the results of all the feature ranking techniques. The ensemble method used in our study is frequency count, with the mean rank used to resolve any frequency-count collisions. The experiments in this work are performed on datasets collected from the Kent Ridge bio-medical data repository: the Lung Cancer dataset, with 57 attributes and 32 instances, and the Lymphoma dataset, with 4027 attributes and 96 instances. Experiments are performed on the reduced datasets obtained from feature ranking, and these datasets are used to build the classification models. Model performance is evaluated in terms of AUC (area under the receiver operating characteristic curve), and ANOVA tests are performed on the AUC metric. Experimental results suggest that an ensemble of multiple feature selection techniques is more effective than an individual feature selection technique.
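The two-step aggregation described above can be sketched as follows (the feature names, the three toy rankings, and the top-k cutoff are invented; only the frequency-count-with-mean-tie-break scheme comes from the abstract):

```python
from collections import Counter

# Three hypothetical rankers, each proposing its own top-4 feature list.
rankings = [
    ["gene_7", "gene_3", "gene_1", "gene_9"],   # e.g. a chi-square ranker
    ["gene_3", "gene_7", "gene_9", "gene_2"],   # e.g. information gain
    ["gene_7", "gene_9", "gene_3", "gene_5"],   # e.g. ReliefF
]

def ensemble_rank(rankings):
    """Step 1 happened above (each ranker sorted the features).
    Step 2: aggregate by frequency count, breaking ties by the mean
    position of the feature across the rankings that include it."""
    freq = Counter(f for r in rankings for f in r)
    def mean_pos(f):
        pos = [r.index(f) for r in rankings if f in r]
        return sum(pos) / len(pos)
    return sorted(freq, key=lambda f: (-freq[f], mean_pos(f)))

print(ensemble_rank(rankings)[:3])
# ['gene_7', 'gene_3', 'gene_9']
```

All three features appear in every ranking (frequency 3), so the mean position decides the final order; a feature seen by only one ranker would fall behind regardless of how highly that ranker placed it.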
