  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
551

Interoperable information model of a pneumatic handling system for plug-and-produce

Alt, Raphael, Wintersohle, Peter, Schweizer, Hartmut, Wollschläger, Martin, Schmitz, Katharina 25 June 2020 (has links)
Commissioning a machine is still a very challenging operation, and most steps are still executed manually by commissioning engineers. A future goal is to support commissioning engineers and further automate the entire integration process of a newly installed system with a minimum of manual effort. This use case is known as plug-and-produce (PnP). In this contribution, an Industrial Internet of Things concept is presented to improve the commissioning task for a pneumatic handling system. The system is based on a service-oriented architecture. Within this context, information models are developed to meet the requirements of PnP by providing relevant information to the commissioning process via virtual representations of the components, e.g. the asset administration shell. Finally, a draft of the entire PnP process is shown, providing a general understanding of Industrial Internet of Things fluid power systems.
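As a rough illustration of the kind of virtual representation involved, the sketch below models a component's asset administration shell as a container of named submodels that a PnP commissioning process could query. The submodel and property names (`TechnicalData`, `strokeLengthMm`, the URN) are invented for illustration and are not taken from the paper:

```python
from dataclasses import dataclass, field


@dataclass
class Submodel:
    """One submodel of an asset administration shell (AAS), e.g. 'TechnicalData'."""
    id_short: str
    properties: dict = field(default_factory=dict)


@dataclass
class AssetAdministrationShell:
    """Minimal virtual representation of a component for plug-and-produce."""
    asset_id: str
    submodels: dict = field(default_factory=dict)

    def get_property(self, submodel: str, name: str):
        # The commissioning process reads component data without vendor-specific code.
        return self.submodels[submodel].properties.get(name)


# A pneumatic cylinder announcing itself to the commissioning process:
cylinder = AssetAdministrationShell(
    asset_id="urn:example:pneumatic-cylinder-01",  # illustrative identifier
    submodels={
        "TechnicalData": Submodel(
            "TechnicalData", {"strokeLengthMm": 100, "maxPressureBar": 8}
        ),
    },
)
print(cylinder.get_property("TechnicalData", "strokeLengthMm"))  # → 100
```

In a real AAS the submodels would follow standardized templates so that any PnP engineering tool can interpret them; the point here is only the lookup pattern.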
552

Implementation and Evaluation of the Canonical Text Service Protocol as Part of a Research Infrastructure in the Digital Humanities

Tiepmar, Jochen 23 May 2018 (has links)
One of the defining factors of modern societies is the ongoing digitization of information, resources and in many ways even life itself. This trend is obviously also reflected in today's research environments and heavily influences the direction in which academic and industrial projects are headed. It is borderline impossible to set up a modern project without including digital aspects, and many projects are even set up for the sole purpose of digitizing a specific part of the world.
One of the side effects of this trend is the emergence of new research fields at the intersection points between the analog world -- represented for example by the humanities -- and the digital world -- represented for example by computer science. One set of such research fields are the digital humanities, the area of interest for this work. In the process of this development, complex research questions, techniques, and principles that were developed independently of one another are aligned next to each other. A lot of work has to go into defining communication between the concepts to prevent misunderstandings and misconceptions on both sides. This bridge-building process is one of the major tasks that must be done by the newly developed research fields. This work proposes such a bridge for the text-oriented digital humanities based on a digital text reference system that was previously developed in the humanities and is in this work reinterpreted as a data communication protocol for computer science: the Canonical Text Service (CTS) protocol.
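The CTS protocol's core idea is that passages of text are addressed by canonical URNs rather than server-specific locations. A minimal sketch of how such a reference decomposes (the example URN is the well-known Perseus reference to Iliad 1.1; the parser below is a simplification of the full URN grammar):

```python
def parse_cts_urn(urn: str) -> dict:
    """Split a CTS URN into its hierarchical components.

    A CTS URN has the general form
        urn:cts:<namespace>:<textgroup>.<work>[.<version>]:<passage>
    and identifies a text passage independently of any one server.
    """
    parts = urn.split(":")
    if parts[:2] != ["urn", "cts"] or len(parts) < 4:
        raise ValueError(f"not a CTS URN: {urn}")
    work_parts = parts[3].split(".")
    return {
        "namespace": parts[2],
        "textgroup": work_parts[0],
        "work": work_parts[1] if len(work_parts) > 1 else None,
        "version": work_parts[2] if len(work_parts) > 2 else None,
        "passage": parts[4] if len(parts) > 4 else None,
    }


ref = parse_cts_urn("urn:cts:greekLit:tlg0012.tlg001.perseus-grc1:1.1")
print(ref["textgroup"], ref["passage"])  # → tlg0012 1.1
```

Because the reference is hierarchical, a CTS server can answer requests at any level of granularity (whole work, book, line), which is what makes the scheme usable as a data communication protocol.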
553

Semantic Driven Approach for Rapid Application Development in Industrial Internet of Things

Thuluva, Aparna Saisree 13 May 2022 (has links)
The evolution of IoT has revolutionized industrial automation. Industrial devices at every level, such as field devices, control devices, and enterprise-level devices, are connected to the Internet, where they can be accessed easily. This has significantly changed the way applications are developed on industrial automation systems and has led to a paradigm shift in which novel IoT application development tools such as Node-RED can be used to develop complex industrial applications as IoT orchestrations. However, in the current state, these applications are bound strictly to devices from specific vendors and ecosystems. They cannot be re-used with devices from other vendors and platforms, since the applications are not semantically interoperable. For this purpose, it is desirable to use platform-independent, vendor-neutral application templates for common automation tasks. However, in the current state of Node-RED, such reusable and interoperable application templates cannot be developed. The interoperability problem at the data level can be addressed in IoT using Semantic Web (SW) technologies. However, for an industrial engineer or an IoT application developer, SW technologies are not very easy to use. In order to enable efficient use of SW technologies to create interoperable IoT applications, novel IoT tools are required. For this purpose, in this paper we propose a novel semantic extension to the widely used Node-RED tool by introducing semantic definitions such as iot.schema.org semantic models into Node-RED. The tool guides a non-expert in semantic technologies, such as a device vendor or a machine builder, to configure the semantics of a device consistently. Moreover, it also enables an engineer or IoT application developer to design and develop semantically interoperable IoT applications with minimal effort. Our approach accelerates the application development process by introducing novel semantic application templates called Recipes in Node-RED.
Using Recipes, complex application development tasks such as skill matching between Recipes and existing things can be automated. We will present the approach to perform automated skill matching on the Cloud or on the Edge of an automation system. We performed quantitative and qualitative evaluation of our approach to test its feasibility and scalability in real-world scenarios. The results of the evaluation are presented and discussed in the paper.
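The skill-matching idea can be sketched as a set-inclusion check between the capabilities a Recipe requires and the capabilities offered by available things. The capability and device names below loosely imitate the iot.schema.org style but are invented for illustration; the paper's actual matching uses full semantic models, not bare strings:

```python
# A Recipe's ingredients: the capabilities each role requires (illustrative names).
recipe_ingredients = {
    "sensor": {"Temperature"},
    "actuator": {"SwitchOnOff"},
}

# Things discovered in the automation system and the capabilities they offer.
available_things = {
    "plc-1": {"Temperature", "Pressure"},
    "relay-7": {"SwitchOnOff"},
    "camera-2": {"ImageCapture"},
}


def match_skills(ingredients, things):
    """For each Recipe ingredient, list the things whose offered capabilities
    cover the required ones (a simple set-inclusion skill match)."""
    return {
        role: [tid for tid, caps in things.items() if required <= caps]
        for role, required in ingredients.items()
    }


print(match_skills(recipe_ingredients, available_things))
# → {'sensor': ['plc-1'], 'actuator': ['relay-7']}
```

Running this match on the Edge rather than in the Cloud only changes where the `things` registry lives; the matching logic itself stays the same.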
554

Enhancing interoperability for IoT-based smart manufacturing: An analytical study of interoperability issues and case study

Wang, Yujue January 2020 (has links)
In the era of Industry 4.0, the Internet of Things (IoT) plays a driving role comparable to that of steam power in the first industrial revolution. IoT provides the potential to combine machine-to-machine (M2M) interaction and real-time data collection within the field of manufacturing. Therefore, the adoption of IoT in industry enhances dynamic optimization, control, and data-driven decision making. However, the domain suffers from interoperability issues, with massive numbers of IoT devices connecting to the internet despite the absence of agreed-upon communication standards. Heterogeneity is pervasive in IoT, ranging from low levels (device connectivity, network connectivity, communication protocols) to high levels (services, applications, and platforms). The project investigates the current state of the industrial IoT (IIoT) ecosystem to draw a comprehensive understanding of interoperability challenges and current solutions in support of IoT-based smart manufacturing. Based upon a literature review, IIoT interoperability issues were classified into four levels: technical, syntactical, semantic, and organizational interoperability. For each level of interoperability, the current solutions addressing interoperability were grouped and analyzed. Nine reference architectures were compared in the context of supporting industrial interoperability. Based on the analysis, interoperability research trends and challenges were identified. FIWARE Generic Enablers (FIWARE GEs) were identified as a possible solution for supporting interoperability in manufacturing applications. FIWARE GEs were evaluated with a scenario-based Method for Evaluating Middleware Architectures (MEMS). Nine key scenarios were identified in order to evaluate the interoperability attribute of FIWARE GEs. A smart manufacturing use case was prototyped, and a testbed adopting FIWARE Orion Context Broker as its main component was designed.
The evaluation shows that FIWARE GEs meet eight out of nine key scenarios' requirements. These results show that FIWARE GEs have the ability to enhance industrial IoT interoperability for a smart manufacturing use case. The overall performance of FIWARE GEs was also evaluated from the perspectives of CPU usage, network traffic, and request execution time. Different request loads were simulated and tested in our testbed. The results show acceptable performance, with a maximum CPU usage (on a MacBook Pro (2018) with a 2.3 GHz Intel Core i5 processor) of less than 25% under a load of 1000 devices, and an average execution time of less than 5 seconds for 500 devices to publish their measurements under the prototyped implementation.
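An Orion-based testbed like the one described revolves around devices publishing context entities over the NGSI v2 interface. The sketch below only builds the JSON payload Orion expects for entity creation; the device id and attribute name are invented for illustration, and no broker is contacted:

```python
import json


def ngsi_entity(device_id: str, temperature: float) -> dict:
    """Build an NGSI v2 entity body for the FIWARE Orion Context Broker.

    Orion accepts such a body via POST /v2/entities; the identifier scheme
    and the 'temperature' attribute here are illustrative.
    """
    return {
        "id": f"urn:ngsi-ld:Device:{device_id}",
        "type": "Device",
        "temperature": {"value": temperature, "type": "Number"},
    }


payload = ngsi_entity("sensor-001", 21.5)
print(json.dumps(payload, indent=2))

# Against a locally running broker one would then publish with, e.g.:
#   requests.post("http://localhost:1026/v2/entities", json=payload)
```

Because every vendor's device is mapped to the same entity/attribute shape, downstream consumers can subscribe to changes without knowing which device produced them, which is the interoperability property the thesis evaluates.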
555

An Investigation of Semantic Interoperability with EHR systems for Precision Dosing

Mukwaya, Jovia Namugerwa January 2020 (has links)
In healthcare, growing numbers of vulnerable patients are using medications with a narrow therapeutic index and wide interpatient pharmacokinetic/pharmacodynamic (PK/PD) variability. For such drugs, one-size-fits-all dosage regimens may result in severe therapeutic failures or adverse drug reactions (ADR). Improved monitoring of patient response to medication and personalization of treatment is therefore warranted. Precision dosing aims to individualize drug regimens for each patient based on independent factors obtained from the patient's clinical records. Personalization of dosing increases the accuracy and efficiency of medication delivery. This can be achieved by exploiting the wide range of data that Electronic Health Records (EHR) contain: the patient's medical history, diagnoses, laboratory test results, demographics, treatment plans, and biomarker data, all of which can be used to generate a patient-specific treatment regimen. For example, Fast Healthcare Interoperability Resources (FHIR) is an existing healthcare standard that provides a framework on which semantic exchange of meaningful clinical information can be built, for instance by using an ontology as a decision support tool to achieve precision medicine. The purpose of this thesis is to investigate the feasibility of interoperability in EHR systems and to propose an ontology framework for precision dosing using currently existing health standards. The methodology involved semi-structured interviews with professionals in relevant areas of expertise and document analysis of existing literature, from which a precision dosing ontology framework was developed. The results identify key tenets of such an ontology framework, along with relevant drugs and their covariates. The thesis also investigates how data requirements in EHR systems, IT platforms, and the implementation and integration of Model-Informed Precision Dosing (MIPD) can be evaluated to cater to interoperability.
With modern healthcare striving for personalization, precision medicine would offer patients an improved therapeutic experience.
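To make the FHIR angle concrete, the sketch below pulls one dosing covariate out of a FHIR R4 Observation resource, the kind of record a dosing service would read from an EHR. The resource is a hand-made sample (LOINC 2160-0 is serum creatinine); nothing here is clinical guidance:

```python
# Minimal FHIR R4 Observation resource carrying a dosing covariate (sample data).
observation = {
    "resourceType": "Observation",
    "code": {
        "coding": [
            {"system": "http://loinc.org", "code": "2160-0", "display": "Creatinine"}
        ]
    },
    "valueQuantity": {"value": 1.4, "unit": "mg/dL"},
}


def covariate(obs: dict, loinc_code: str):
    """Return the numeric value if the Observation carries the given LOINC code,
    else None. Standard coding makes the lookup EHR-vendor independent."""
    codes = {c["code"] for c in obs.get("code", {}).get("coding", [])}
    if loinc_code in codes:
        return obs["valueQuantity"]["value"]
    return None


print(covariate(observation, "2160-0"))  # → 1.4
```

Because the covariate is addressed by a standard terminology code rather than a vendor-specific field name, the same extraction works against any FHIR-conformant EHR, which is precisely the semantic interoperability the thesis examines.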
556

More tools for Canvas: Realizing a Digital Form with Dynamically Presented Questions and Alternatives

Sarwar, Reshad, Manzi, Nathan January 2019 (has links)
At KTH, students who want to start their degree project must complete a paper form called “UT-EXAR: Ansökan om examensarbete/application for degree project”. The form is used to determine students’ eligibility to start a degree project, as well as potential examiners for the project. After the form is filled in and signed by multiple parties, a student can initiate his or her degree project. However, due to the excessively time-consuming process of completing the form, an alternative solution was proposed: a survey in the Canvas Learning Management System (LMS) that replaces the UT-EXAR form. Although the survey reduces the time required by students to provide information and find examiners, it is by no means the most efficient solution. The survey suffers from multiple flaws, such as asking students to answer unnecessary questions and, for certain questions, presenting students with more alternatives than necessary. The survey also fails to automatically organize the data collected from the students’ answers; hence administrators must manually enter the data into a spreadsheet or other record. This thesis proposes an optimized solution to the problem by introducing a dynamic survey. This dynamic survey uses the Canvas Representational State Transfer (REST) API to access students’ program-specific data. Additionally, the survey can use data provided by students when answering the survey questions to dynamically construct questions for each individual student, as well as information from other KTH systems to dynamically construct customized alternatives for each individual student. This solution effectively prevents the survey from presenting students with questions and choices that are irrelevant to their individual case. Furthermore, the proposed solution directly inserts the data collected from the students into a Canvas Gradebook.
In order to implement and test the proposed solution, a version of the Canvas LMS was created by virtualizing each Canvas-based microservice inside a Docker container and allowing the containers to communicate over a network. Furthermore, the survey itself used the Learning Tools Interoperability (LTI) standard. When testing the solution, the survey not only successfully filtered the questions and alternative answers based on the user’s data, but also showed great potential to be more efficient than a survey with statically presented data. The survey effectively automates the insertion of the data into the gradebook.
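The filtering idea can be sketched independently of Canvas itself: given a student's program data (as might be fetched from the Canvas REST API), present only the questions relevant to that student. The question texts and program codes below are invented for illustration; real enrollment records would come from an API call rather than a literal list:

```python
# Question banks per study program (illustrative codes and question texts).
QUESTIONS = {
    "CINTE": ["Choose your ICT track", "Name a preferred examiner"],
    "CDATE": ["Choose your CS specialization", "Name a preferred examiner"],
}


def relevant_questions(enrollments: list) -> list:
    """Union of the questions for every program the student is enrolled in,
    preserving order and skipping duplicates, so the dynamic survey never
    shows a question that does not apply to this student."""
    asked = []
    for enrollment in enrollments:
        for question in QUESTIONS.get(enrollment.get("program", ""), []):
            if question not in asked:
                asked.append(question)
    return asked


print(relevant_questions([{"program": "CINTE"}]))
# → ['Choose your ICT track', 'Name a preferred examiner']
```

The same selection step would run server-side in the LTI tool, with the enrollment list populated from the student's actual Canvas profile.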
557

The Generation Alpha of Digital Health Innovations – A Case Study from Multiple Sclerosis Care

Schlieter, Hannes, Susky, Marcel, Richter, Peggy, Hickmann, Emily, Scheplitz, Tim, Burwitz, Martin, Ziemssen, Tjalf 01 March 2024 (has links)
Through the development of numerous new technologies and standards, the digital health care transformation progressively enables individualized, need-based, and interprofessional treatment provision. The new generation of digital health innovations -- in analogy to the age cohorts, the digital health generation alpha -- can therefore fulfill information, communication, and interoperability tasks along the entire care process that previously often represented an insurmountable challenge.
The four design dimensions of the digital health generation alpha are introduced: i) pathway orientation, ii) patient orientation and involvement, iii) quality orientation, and iv) integration capability. These are first discussed on the basis of current literature, and their practical utilization is then demonstrated in a case study on patients with multiple sclerosis. A process model for pathway development is presented that incorporates the four design dimensions and transforms a treatment process into a digital, participative, and integrated care model. Central guiding questions, concrete implementation measures, and literature-based design goals are described based on the process model. Finally, implications for the future digital health agenda are derived, which are particularly necessary for the realization of innovative value propositions and their integration into complex healthcare target environments.
558

Investigation of Integrated Decoupling Methods for MIMO Antenna Systems. Design, Modelling and Implementation of MIMO Antenna Systems for Different Spectrum Applications with High Port-to-Port Isolation Using Different Decoupling Techniques

Salah, Adham M.S. January 2019 (has links)
Multiple-Input-Multiple-Output (MIMO) antenna technology refers to an antenna with multiple radiators at both the transmitter and receiver ends. It is designed to increase the data rate in wireless communication systems by achieving multiple channels occupying the same bandwidth in a multipath environment. The main drawback associated with this technology is the coupling between the radiating elements. A MIMO antenna system merely acts as an antenna array if the coupling between the radiating elements is high. For this reason, strong decoupling between the radiating elements should be achieved in order to utilize the benefits of MIMO technology. The main objectives of this thesis are to investigate and implement several printed MIMO antenna geometries with integrated decoupling approaches for WLAN, WiMAX, and 5G applications. The characteristics of MIMO antenna performance have been reported in terms of scattering parameters, envelope correlation coefficient (ECC), total active reflection coefficient (TARC), channel capacity loss (CCL), diversity gain (DG), antenna efficiency, antenna peak gain, and antenna radiation patterns. Three new 2×2 MIMO array antennas are proposed, covering dual and multiple spectrum bandwidths for WLAN (2.4/5.2/5.8 GHz) and WiMAX (3.5 GHz) applications. These designs employ a combination of DGS and neutralization-line methods to reduce the coupling caused by the surface current in the ground plane and between the radiating antenna elements. The minimum achieved isolation between the MIMO antennas is found to be better than 15 dB and in some bands exceeds 30 dB. The matching impedance is improved and the correlation coefficient values achieved for all three antennas are very low. In addition, the diversity gains over all spectrum bands are very close to the ideal value (DG = 10 dB). The fourth proposed MIMO antenna is a compact dual-band MIMO antenna operating at WLAN bands (2.4/5.2/5.8 GHz).
The antenna structure consists of two concentric double-square-ring radiating elements printed symmetrically. A new method is applied which combines the defected ground structure (DGS) decoupling method with five parasitic elements to reduce the coupling between the radiating antennas in the two required bands. A metamaterial-based isolation enhancement structure is investigated in the fifth proposed MIMO antenna design. This MIMO antenna consists of two dual-band arc-shaped radiating elements working in the WLAN and sub-6 GHz 5th-generation (5G) bands. The antenna placement and orientation decoupling method is applied to improve the isolation in the second band, while four split-ring resonators (SRRs) are added between the radiating elements to enhance the isolation in the first band. All the designs presented in this thesis have been fabricated and measured, with the simulated and measured results agreeing well in most cases. / Higher Committee for Education Development in Iraq (HCED)
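The ECC and diversity-gain figures quoted above can be related to the scattering parameters through the standard closed-form expression for a two-port antenna (Blanch et al.), which assumes a lossless antenna. The sample S-parameter magnitudes below are illustrative, not values from the thesis:

```python
def ecc_from_s(s11: complex, s12: complex, s21: complex, s22: complex) -> float:
    """Envelope correlation coefficient of a 2-port MIMO antenna from complex
    S-parameters (Blanch closed form; valid for a lossless antenna)."""
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den


# Ports matched to -20 dB (|S11| = 0.1) and isolated to about -26 dB (|S21| = 0.05):
ecc = ecc_from_s(0.1 + 0j, 0.05 + 0j, 0.05 + 0j, 0.1 + 0j)
dg = 10 * (1 - abs(ecc) ** 2) ** 0.5  # diversity gain in dB, ideally 10 dB

print(f"ECC = {ecc:.2e}, DG = {dg:.4f} dB")
```

With well-matched, well-isolated ports the ECC comes out far below the usual 0.5 design threshold and the diversity gain sits essentially at the 10 dB ideal, which mirrors the relationship between the isolation and DG results reported in the abstract.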
559

Developing a Semantic Framework for Healthcare Information Interoperability

Aydar, Mehmet 30 November 2015 (has links)
No description available.
560

A FRAMEWORK FOR IMPROVED DATA FLOW AND INTEROPERABILITY THROUGH DATA STRUCTURES, AGRICULTURAL SYSTEM MODELS, AND DECISION SUPPORT TOOLS

Samuel A Noel (13171302) 28 July 2022 (has links)
The agricultural data landscape is largely dysfunctional because of the industry’s high variability in scale, scope, technological adoption, and relationships. Integrated data and models of agricultural sub-systems could be used to advance decision-making, but interoperability challenges prevent successful innovation. In this work, temporal and geospatial indexing strategies and aggregation were explored toward the development of functional data structures for soils, weather, solar, and machinery-collected yield data that enhance data context, scalability, and sharability.

The data structures were then employed in the creation of decision support tools, including web-based applications and visualizations. One such tool leveraged a geospatial indexing technique called geohashing to visualize dense yield data and measure the outcomes of on-farm yield trials. Additionally, the proposed scalable, open-standard data structures were used to drive a soil water balance model that can provide insights into soil moisture conditions critical to farm planning, logistics, and irrigation. The model integrates SSURGO soil data, weather data from the Applied Climate Information System, and solar data from the National Solar Radiation Database in order to compute a soil water balance, returning values including runoff, evaporation, and soil moisture in an automated, continuous, and incremental manner.

The approach leveraged the Open Ag Data Alliance framework to demonstrate how the data structures can be delivered through sharable Representational State Transfer Application Programming Interfaces, and to run the model in a service-oriented manner such that it can operate continuously and incrementally, which is essential for driving real-time decision support tools. The implementations rely heavily on JavaScript Object Notation data schemas leveraged by JavaScript/TypeScript front-end web applications and back-end services delivered through Docker containers. The approach embraces modular coding concepts, and several levels of open-source utility packages were published for interacting with data sources and supporting the service-based operations.

By making use of the strategies laid out by this framework, industry and research can enhance data-based decision making through models and tools. Developers and researchers will be better equipped to take on the data wrangling tasks involved in retrieving and parsing unfamiliar datasets, moving them throughout information technology systems, and understanding those datasets down to a semantic level.
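The geohashing technique mentioned above can be sketched compactly: a latitude/longitude pair is encoded into a short base-32 string such that nearby points share a common prefix, which makes the hash a natural key for aggregating dense yield samples into cells. The encoder below follows the standard public geohash algorithm; the coordinates are illustrative:

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash alphabet (no a, i, l, o)


def geohash(lat: float, lon: float, precision: int = 7) -> str:
    """Encode a point as a geohash by alternately bisecting the longitude
    and latitude ranges; each group of 5 bits becomes one base-32 character."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    bits, ch, even = 0, 0, True
    code = []
    while len(code) < precision:
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        ch <<= 1
        if val >= mid:
            ch |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        even = not even
        bits += 1
        if bits == 5:
            code.append(BASE32[ch])
            bits, ch = 0, 0
    return "".join(code)


# Two yield samples a few metres apart map into nearby (often identical) cells:
print(geohash(40.4259, -86.9081, 6))
print(geohash(40.4260, -86.9082, 6))
```

Aggregating yield records by, say, their 6- or 7-character geohash turns millions of machinery-collected points into a manageable grid for visualization and trial analysis, which is the role geohashing plays in the tools described above.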
