321

CONFIGURATION BIT STREAM GENERATION FOR THE MT-FPGA & ARCHITECTURAL ENHANCEMENTS FOR ARITHMETIC IMPLEMENTATIONS

SAPRE, VISHAL 13 July 2005 (has links)
No description available.
322

AN APPROACH TOWARDS HDL MODEL GENERATION FOR THE MULTI-TECHNOLOGY FIELD PROGRAMMABLE GATE ARRAY

RAMASWAMY, EASWAR SINGANELLORE 03 April 2006 (has links)
No description available.
323

A Management Paradigm for FPGA Design Flow Acceleration

Tavaragiri, Abhay 21 July 2011 (has links)
Advances in FPGA density and complexity have not been matched by a corresponding improvement in the performance of the implementation tools. Knowledge of incremental changes in a design can lead to fast turnaround times for implementing even large designs. This thesis provides a high-level overview of an incremental productivity flow, focusing on the back-end FPGA design process. It presents a management paradigm that captures design-specific information in a format that is reusable across the entire design process. A C++-based internal data structure stores all the information, while XML provides an external view of the design data. This work provides a vendor-independent, universal format for representing the logical and physical information associated with FPGA designs. / Master of Science
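The split the abstract describes, an internal data structure with an XML external view, can be sketched as follows. This is a minimal illustration only: the element and attribute names (`design`, `cell`, `placement`, `site`) are invented for the example and are not the thesis's actual schema.

```python
# Sketch: exporting logical and physical FPGA design data as one
# vendor-neutral XML view. All tag/attribute names are illustrative.
import xml.etree.ElementTree as ET

def export_design(name, cells):
    """Serialize logical cells and their physical placements to XML."""
    root = ET.Element("design", name=name)
    logical = ET.SubElement(root, "logical")
    physical = ET.SubElement(root, "physical")
    for cell, site in cells:
        ET.SubElement(logical, "cell", name=cell)
        ET.SubElement(physical, "placement", cell=cell, site=site)
    return ET.tostring(root, encoding="unicode")

xml_view = export_design("counter", [("ff0", "SLICE_X0Y0"), ("ff1", "SLICE_X0Y1")])
```

Keeping logical and physical information in separate subtrees of one document is what makes the view reusable across design steps: a placer can rewrite `physical` without touching `logical`.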
324

Checking Metadata Usage for Enterprise Applications

Zhang, Yaxuan 20 May 2021 (has links)
It is becoming more and more common for developers to build enterprise applications on the Spring framework or other Java frameworks. While developers enjoy the convenient implementations of web frameworks, they must pay attention to configuration deployment with metadata usage (i.e., Java annotations and XML deployment descriptors). Different formats of metadata can correspond to each other, and metadata usually exist in multiple files, so maintaining such metadata is challenging and time-consuming. Current compilers and research tools rarely inspect the XML files, let alone the correspondence between Java annotations and XML files. To help developers ensure the quality of metadata, this work presents a Domain Specific Language, RSL, and its engine, MeEditor. RSL facilitates pattern definition for correct metadata usage; MeEditor takes in the specified rules and checks Java projects for any rule violations. Developers define rules with RSL for the metadata usage of interest and then run the RSL script with MeEditor. Nine rules were extracted from the Spring specification and written in RSL. To evaluate the effectiveness and usefulness of MeEditor, we mined 180 plus 500 open-source projects from GitHub and conducted the evaluation in two steps. First, we evaluated the effectiveness of MeEditor by constructing a known ground-truth data set; in experiments on this data set, MeEditor identified metadata misuse, detecting bugs with 94% precision, 94% recall, and 94% accuracy. Second, we evaluated the usefulness of MeEditor by applying it to real-world projects (500 projects in total). For the latest version of these 500 projects, MeEditor achieved 79% precision according to our manual inspection. Then, we applied MeEditor to the version histories of rule-adopted projects, which adopt the rule and are identified as correct in their latest version.
MeEditor identified 23 bugs, which were later fixed by developers. / Master of Science
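The kind of cross-file consistency check the abstract attributes to MeEditor can be sketched as below: verify that every bean declared in an XML deployment descriptor is backed by a class the annotation scan actually found. The rule, the descriptor layout, and all class names are illustrative assumptions, not RSL syntax or MeEditor's real implementation.

```python
# Sketch: flag beans declared in an XML descriptor whose class lacks the
# expected annotation. Rule and names are invented for illustration.
import xml.etree.ElementTree as ET

DESCRIPTOR = """<beans>
  <bean id="userService" class="com.example.UserService"/>
  <bean id="orderService" class="com.example.OrderService"/>
</beans>"""

# Classes a static scan reported as carrying e.g. @Service annotations.
ANNOTATED_CLASSES = {"com.example.UserService"}

def find_violations(descriptor_xml, annotated):
    """Report XML-declared beans whose class is missing the annotation."""
    root = ET.fromstring(descriptor_xml)
    return [b.get("class") for b in root.iter("bean")
            if b.get("class") not in annotated]

violations = find_violations(DESCRIPTOR, ANNOTATED_CLASSES)
```

The point of the example is the correspondence problem itself: neither the Java compiler nor an XML validator alone can see that `orderService` points at a class the annotation-based configuration never registered.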
325

John Cleland's The Dictionary of Love: An XML Edition

Davis, Emily Katherine 16 May 2007 (has links)
Conducting and disseminating humanities research is fast becoming a highly technological endeavor. The variety of multimedia options for presenting information changes the questions we ask and the answers we find as well as the problems we encounter and the solutions we devise. The following essays provide an account of creating a digital edition of John Cleland's The Dictionary of Love using XML. The project utilizes traditional literary research methods while working toward an untraditional digital final product, a characteristic that highlights the feedback loop between form and function. Thus, the purpose of this project is twofold: to provide students and scholars information and analysis on The Dictionary of Love and, in the process, to examine and discuss the challenges, drawbacks and benefits of producing the content as a web-compatible resource. / Master of Arts
326

Complete Vendor-Neutral Instrumentation Configuration with IHAL and TMATS XML

Hamilton, John, Darr, Timothy, Fernandes, Ronald, Sulewski, Joe, Jones, Charles 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / Previously, we have presented an approach to achieving standards-based multi-vendor hardware configuration using the Instrumentation Hardware Abstraction Language (IHAL) and an associated Application Programming Interface (API) specification. In this paper, we extend this approach to include support for configuring PCM formats. This capability is an appropriate fit for IHAL since changes to hardware settings can affect the current telemetry format and vice versa. We describe extensions made to the IHAL API in order to support this capability. Additionally, we show how complete instrumentation configurations can be described using an integrated IHAL and TMATS XML. Finally, we describe a demonstration of this capability implemented for data acquisition hardware produced by L-3 Telemetry East.
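The paper's central idea, one integrated document describing both hardware settings and the telemetry format, can be sketched as a trivial composition of two XML fragments. The tag names below are placeholders; the real IHAL and TMATS XML schemas are far richer than this.

```python
# Sketch: wrapping an IHAL-style hardware section and a TMATS-style PCM
# description under a single configuration root. Tag names are placeholders.
import xml.etree.ElementTree as ET

def integrate(hardware_xml, tmats_xml):
    """Combine hardware settings and a PCM format in one configuration root."""
    root = ET.Element("instrumentationConfiguration")
    root.append(ET.fromstring(hardware_xml))
    root.append(ET.fromstring(tmats_xml))
    return ET.tostring(root, encoding="unicode")

combined = integrate('<hardware vendor="L-3"><encoder bitRate="5000000"/></hardware>',
                     '<tmats><pcmFormat wordLength="16"/></tmats>')
```

Keeping both views in one document is what lets a tool propagate a hardware change (say, the encoder bit rate) into the PCM format description, and vice versa, as the paper describes.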
327

Persistent Java Data Objects on XML Databases

侯語政, Hou, Yu-Cheng Unknown Date (has links)
Object persistence is a recurring need in the development of application systems. Traditionally, developers must transform objects into forms that databases can accept and then store them in the backend database. This forces developers to deal with two data models at the same time: besides the object model the application uses, they must also handle the data model of the backend database, such as the relational model, as well as the conversion between the two. This not only increases the complexity of system development but also makes the system harder to maintain. Java Data Objects (JDO), a newer object-persistence technology, offers a standard framework that handles object persistence on the developers' behalf, so that developers can build applications with the object model alone. Meanwhile, the rise of XML technologies has accelerated the use of XML documents for data exchange and storage, and databases dedicated to storing XML documents are increasingly common. Our research in this thesis realizes JDO by serializing Java objects as XML documents and using XML databases as persistent repositories to store the resulting documents.
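The core idea, persisting plain objects as XML documents so the developer only ever touches the object model, can be sketched as a round trip. The thesis works with Java Data Objects; the sketch below transplants the idea to Python for brevity, and the `Account` class and its fields are invented for illustration.

```python
# Sketch: serialize a plain object to an XML document and rebuild it,
# the round trip an XML-backed persistence layer performs transparently.
import xml.etree.ElementTree as ET
from dataclasses import dataclass, fields

@dataclass
class Account:          # stand-in for an application-domain object
    owner: str
    balance: int

def to_xml(obj):
    """Serialize a dataclass instance as an XML element string."""
    root = ET.Element(type(obj).__name__)
    for f in fields(obj):
        ET.SubElement(root, f.name).text = str(getattr(obj, f.name))
    return ET.tostring(root, encoding="unicode")

def from_xml(text):
    """Rebuild the Account from its XML form (inverse of to_xml)."""
    root = ET.fromstring(text)
    return Account(owner=root.findtext("owner"),
                   balance=int(root.findtext("balance")))

restored = from_xml(to_xml(Account("alice", 100)))
```

A JDO implementation automates exactly this mapping (plus identity, transactions, and queries), which is why the application code never needs to know the repository stores XML.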
328

Design and Implementation of a Content Aggregator for Electronic Medical Records

林柏維, Lin, Bo Wei Unknown Date (has links)
The Taiwan Electronic Medical Record Template (TMT), proposed by the Taiwan Association for Medical Informatics (TAMI), aims to provide a suite of standard forms as the common basis for developing electronic medical record (EMR) systems in Taiwan. It is specified in XML to facilitate data interchange. To further assess the usefulness of TMT, in 2007 the Department of Health launched the project "Building of an Information Exchange Environment for Cross-Hospital Digital Medical Record" to put the TMT to a field test. Eleven hospitals took part in the project, and all of them successfully implemented a significant subset of TMT using their hospital information systems (HIS). / Towards the end of the project, however, we identified three major shortcomings of the content aggregator for TMT provided by TAMI. First, as the TMT Schema is rather complex, it is very difficult for hospital IT staff to prepare the query instructions required to retrieve the data stored in the HIS database; although an XML data mapping tool was provided to simplify the mapping process, it did not ease the task as the TAMI staff had expected. Second, the configuration files for preparing a patient's EMR are too complicated, making the implementation process not only time-consuming but also error-prone. Third, the time required to produce a single sheet of TMT is much longer than planned, so there is an urgent need to improve the performance of the content aggregator. / We therefore re-engineered the content aggregator of TMT for retrieving the required data from the HIS database. Specifically, we redesigned the specification document files and configuration files, and provided a Schema Processor tool to generate these files semi-automatically. As a result, hospital IT staff can more quickly understand the structure of the TMT Schema and prepare query instructions effectively. Finally, with the improved configuration files, our TMT document generator runs much faster than the existing one: according to our experimental results, it improves the performance of generating a TMT sheet by more than 80 percent, cutting generation time to about one-fifth of the original.
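A config-driven generator of the kind the re-engineered aggregator describes can be sketched as below: a mapping file names which HIS field feeds which TMT element, and the generator fills the document from one query result. The field names and the flat one-level schema are invented for the example; the real TMT Schema is far more complex.

```python
# Sketch: fill a TMT-like XML sheet from HIS query results using a
# declarative mapping. Field names and schema are illustrative only.
import xml.etree.ElementTree as ET

CONFIG = {"patientName": "his.name", "visitDate": "his.date"}   # TMT tag -> HIS field
HIS_ROW = {"his.name": "Wang", "his.date": "2008-05-01"}        # one query result

def generate_sheet(config, row):
    """Build one TMT-like sheet by mapping HIS fields onto TMT elements."""
    root = ET.Element("TMTSheet")
    for tag, field in config.items():
        ET.SubElement(root, tag).text = row[field]
    return ET.tostring(root, encoding="unicode")

sheet = generate_sheet(CONFIG, HIS_ROW)
```

Putting the mapping in data rather than code is the design choice that lets a Schema Processor regenerate the configuration when the schema changes, instead of IT staff rewriting query logic by hand.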
329

Analysis and Implementation of Data Migration Between Information Systems

Urbanavičius, Tomas 28 June 2010 (has links)
This master's thesis investigates data migration between information systems. The aim of the work is to migrate client data from an old information system to a new one. The data migration methods most often used in practice are analyzed, along with their advantages and drawbacks, and the stages of data migration are presented in detail together with how they should be applied in the migration process. The practical part presents a data migration tool developed at UAB „Exigen Services Vilnius“. The program was built to migrate legacy insurance-policy data, using an XML file for information transfer between the export tool and the data loader. During development, a data migration specification was drawn up, a migration methodology was chosen, and the migration tool was implemented; the loader's specification is also described in this work. The thesis consists of 5 parts: data migration methodology, data migration stages and problems, the data migration process, the data migration implementation, and the data migration tool specification. The thesis comprises 63 pages of text without appendices, 25 figures, 2 tables, and 11 bibliographical entries.
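XML as the hand-off format between an export tool and a data loader, as the thesis describes for the legacy policy data, can be sketched as below. The record layout and the field renames are illustrative assumptions, not the tool's actual format.

```python
# Sketch: parse an exported XML file into rows shaped for the new
# system's schema. Record layout and field names are illustrative.
import xml.etree.ElementTree as ET

EXPORTED = """<policies>
  <policy><no>A-1</no><holder>Jonas</holder></policy>
  <policy><no>A-2</no><holder>Ruta</holder></policy>
</policies>"""

def load(xml_text):
    """Transform exported policy records into the new system's row shape."""
    return [{"policy_id": p.findtext("no"), "customer": p.findtext("holder")}
            for p in ET.fromstring(xml_text).iter("policy")]

rows = load(EXPORTED)
```

Routing the migration through a neutral file like this decouples the two systems: the exporter needs no knowledge of the new schema, and the loader none of the old database.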
330

Making the stones speak

Roueché, Charlotte 17 March 2017 (has links) (PDF)
No description available.
