81

Emprego de computadores em elucidação estrutural de alcalóides / Use of computers in structural elucidation of alkaloids

Alessandra Rodrigues Rufino 12 May 2005 (has links)
The SISTEMAT expert system was built to assist researchers in the field of natural products, and also synthetic organic chemists, in the task of structural determination. Its application programs propose skeletons from the data of several spectrometric techniques, among which 13C nuclear magnetic resonance (NMR) spectrometry plays the leading role. This work describes the use of SISTEMAT as an auxiliary tool in the structural determination of substances belonging to the quinoline, quinolizidine, aporphine, benzylisoquinoline, isoquinoline, pyrrolizidine, acridone, and indole subclasses of alkaloids. A database of 1182 alkaloids, all collected from the literature, was built for this work; it contains 1156 13C NMR spectra, 354 1H NMR spectra, and 320 mass spectra, and the plant-derived substances are distributed across 49 families, 164 genera, and 260 species. Around 100 tests were carried out, 30 of which are presented in this thesis; they yielded good success rates for skeleton recognition. Artificial neural networks were also used in this work: networks were trained to assist in the structural determination of aporphine alkaloids, supplying the probability that a given substance belongs to the queried skeleton. For the neural networks, a spreadsheet was built with the 13C NMR chemical shifts of 165 aporphine alkaloids belonging to 12 different skeletons. The networks gave excellent results, classifying the skeletons with a high degree of reliability.
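As a rough illustration of the classifier described above (not the SISTEMAT code itself), the sketch below bins a compound's 13C chemical shifts into a fixed-length histogram and trains a small feed-forward network over the 12 skeleton classes; the bin width, network size, and toy data are all assumptions.

```python
# Illustrative sketch: classify alkaloid skeletons from 13C NMR shifts by
# binning the shifts into a fixed-length histogram and training a small
# feed-forward network, in the spirit of the 12-skeleton aporphine study.
import numpy as np
from sklearn.neural_network import MLPClassifier

N_BINS = 44  # 0-220 ppm in 5 ppm bins (bin width is an assumption)

def shifts_to_features(shifts_ppm):
    """Histogram a variable-length list of 13C shifts into a fixed vector."""
    hist, _ = np.histogram(shifts_ppm, bins=N_BINS, range=(0.0, 220.0))
    return hist.astype(float)

# Toy data standing in for the thesis's 165-compound spreadsheet.
rng = np.random.default_rng(0)
X = np.array([shifts_to_features(rng.uniform(10, 180, size=20)) for _ in range(120)])
y = rng.integers(0, 12, size=120)           # 12 skeleton classes

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
proba = clf.predict_proba(X[:1])            # per-skeleton membership probability
print(proba.round(3))
```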
82

Chemnitzer Informatik-Berichte / Chemnitz Computer Science Reports

29 August 2017 (has links)
Computer science, as a key technology of the 21st century, has an exceptional impact on our everyday life and living standards. The Faculty of Computer Science represents this scientific field in a comprehensive and proficient manner with an application-oriented choice of topics. In its research focus areas of embedded self-organizing systems, intelligent multimedia systems, and parallel distributed systems, it offers internationally competitive research and development on current problems. The guiding principle of its teaching is continuous renewal from research; from this it derives modern Bachelor's and Master's programs with excellent study conditions. The faculty strives for the most personal possible interaction between teaching staff and students. With the publication series "Chemnitzer Informatik-Berichte" (Chemnitz Computer Science Reports), it gives insights into its research practice, presenting research topics from the three focus areas and all professorships of the faculty.
83

An Effective Framework of Autonomous Driving by Sensing Road/motion Profiles

Zheyuan Wang (11715263) 22 November 2021 (has links)
With more and more videos taken from dash cams on thousands of cars, retrieving these videos and searching them for important information is a daunting task. The purpose of this work is to mine key road and vehicle-motion attributes from a large-scale driving-video data set for traffic analysis, sensing-algorithm development, and autonomous-driving test benchmarks. Current sensing and control of autonomous cars based on full-view identification make it difficult to maintain a high processing frequency on a fast-moving vehicle, since increasing computation is needed to cope with changes in the driving environment.

A big challenge in video data mining is coping with huge amounts of data. We use a compact representation called the road profile system to visualize the road environment in long 2D images. It reduces each frame of video to one line, thereby compressing a video clip to a single image. This dimensionality reduction has several advantages. First, the data size is greatly compressed: the data shrinks from a video to an image, each frame becoming one line, a reduction of several hundred times. Although the size and dimensionality of the data are greatly reduced, the useful information in the driving video is completely preserved, and motion information is represented even more intuitively. Because of this reduction, the identification algorithm is computationally more efficient than full-view identification, making real-time identification on the road possible. Second, the data is easier to visualize: three-dimensional video data is compressed into two dimensions, which is more conducive to visualization and to comparison between data sets. Third, continuously changing attributes are easier to show and capture: with two-dimensional data, the position, color, and size of the same object across a few frames are easier to compare and capture, and in many cases the trouble caused by tracking and matching can be eliminated. Based on the road profile system, three autonomous-driving tasks are achieved using road profile images.

The first application is road-edge detection under different weather and appearance conditions for road following in autonomous driving, capturing the road profile image and the linearity profile image in the road profile system. This work mines naturalistic driving video to study the appearance of roads, covering large-scale road data and its variations. A large set of naturalistic driving videos was mined to sample the light-sensitive area for color-feature distributions. The effective road contour image is extracted from long driving videos, greatly reducing the amount of video data. The weather and lighting type can then be identified, and for each weather and lighting condition, distinctive features are identified at the edge of the road to distinguish the road edge.

The second application is detecting vehicle interactions in driving videos via motion profile images captured in the road profile system. This work uses visual actions recorded in dashboard-camera videos to identify these interactions. The motion profile images of the video are filtered at key locations, reducing the complexity of object detection, depth sensing, target tracking, and motion estimation. The purpose of this reduction is decision making for vehicle actions such as lane changing, vehicle following, and cut-in handling.

The third application is motion planning based on vehicle interactions and driving video. Noting that a car travels in a straight line, we identify a few sample lines in the view to constantly scan the road, vehicles, and environment, generating only a fraction of the entire video data. Without redundant data processing, we perform semantic segmentation on streaming road profile images and plan the vehicle's path/motion using the smallest data set that contains all the information necessary for driving.

The results are obtained efficiently, and their accuracy is acceptable. They can be used for driving-video mining, traffic analysis, driver-behavior understanding, etc.
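The core reduction is easy to picture in code. The sketch below, an illustration under assumptions rather than the author's implementation, collapses a dash-cam clip into a road-profile image by keeping one scanline per frame; the fixed sampling row is a stand-in for the geometry-aware line selection described in the abstract.

```python
# Sketch of the road-profile reduction: each video frame contributes a
# single scanline, so a whole clip collapses into one 2D image
# (rows = time, columns = image width).
import cv2
import numpy as np

def build_road_profile(video_path, row_frac=0.75):
    cap = cv2.VideoCapture(video_path)
    lines = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        row = int(frame.shape[0] * row_frac)    # sample one horizontal line
        lines.append(frame[row, :, :])          # shape: (width, 3)
    cap.release()
    return np.stack(lines) if lines else None   # (num_frames, width, 3)

profile = build_road_profile("dashcam.mp4")     # hypothetical clip path
if profile is not None:
    cv2.imwrite("road_profile.png", profile)
```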
84

Chemnitzer Informatik-Berichte

Hardt, Wolfram 29 August 2017 (has links)
Computer science, as a key technology of the 21st century, has an exceptional impact on our everyday life and living standards. The Faculty of Computer Science represents this scientific field in a comprehensive and proficient manner with an application-oriented choice of topics. In its research focus areas of embedded self-organizing systems, intelligent multimedia systems, and parallel distributed systems, it offers internationally competitive research and development on current problems. The guiding principle of its teaching is continuous renewal from research; from this it derives modern Bachelor's and Master's programs with excellent study conditions. The faculty strives for the most personal possible interaction between teaching staff and students. With the publication series "Chemnitzer Informatik-Berichte" (Chemnitz Computer Science Reports), it gives insights into its research practice, presenting research topics from the three focus areas and all professorships of the faculty.
85

Performance Comparison of Public Bike Demand Predictions: The Impact of Weather and Air Pollution

Min Namgung (9380318) 15 December 2020 (has links)
Many metropolitan cities motivate people to use public bike-sharing programs as alternative transportation for many reasons. Owing to their popularity, research on optimizing public bike-sharing systems has been conducted at the city, neighborhood, station, or user level to predict public bike demand. Previous research on public bike demand prediction primarily focused either on discovering a relationship with weather as an external factor that might affect bike usage, or on analyzing bike-user trends in a single respect. This work hypothesizes two external factors that are likely to affect public bike demand: weather and air pollution. The study uses a public bike data set, daily temperature and precipitation data, and air-quality data to discover bike-usage trends with several machine learning techniques: Decision Tree, Naïve Bayes, and Random Forest. Each algorithm's output is then evaluated with performance measures such as accuracy, precision, and sensitivity. The results show that Random Forest is an efficient classifier for predicting bike demand from weather and precipitation, while Decision Tree performs best for predicting bike demand from air pollutants. In addition, the three-class labeling of daily bike demand has high specificity and makes it easy to trace trends in the public bike system.
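A minimal sketch of the comparison described above, with synthetic data and hypothetical column names standing in for the study's merged data set: bin daily demand into three classes, train the three named classifiers on weather and air-quality features, and compare the standard metrics.

```python
# Illustrative pipeline (not the study's code) for three-class bike-demand
# prediction from weather and air-pollution features.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the merged daily data set (columns are assumptions).
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "temp_c": rng.uniform(-5, 30, n),
    "precip_mm": rng.exponential(3, n),
    "pm10": rng.uniform(10, 120, n),
    "pm25": rng.uniform(5, 80, n),
})
# Demand loosely driven by warm, dry, clean-air days, plus noise.
demand = 50 + 3 * df["temp_c"] - 4 * df["precip_mm"] - 0.3 * df["pm25"] \
         + rng.normal(0, 15, n)
y = pd.qcut(demand, q=3, labels=["low", "mid", "high"])   # three-class label

X_tr, X_te, y_tr, y_te = train_test_split(df, y, test_size=0.2, random_state=0)
for model in (DecisionTreeClassifier(random_state=0), GaussianNB(),
              RandomForestClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    print(model.__class__.__name__)
    print(classification_report(y_te, model.predict(X_te)))
```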
86

Extraction and Integration of Physical Illumination in Dynamic Augmented Reality Environments

A'aeshah Abduallah Alhakamy (9371225) 16 December 2020 (has links)
Current augmented, virtual, and mixed reality (AR/VR/MR) systems offer advanced, immersive experiences in the entertainment industry across countless media forms, yet they lack correct modeling of direct and indirect illumination, in which virtual objects are rendered under the same lighting conditions as the real environment. Some systems use baked global illumination (GI), pre-recorded textures, and light probes, mostly produced offline, to stand in for precomputed real-time GI. Instead, illumination information can be extracted from the physical scene to interactively render virtual objects into the real world, producing a more realistic final scene in real time. This work approaches the problem of visual coherence in AR with a system that detects real-world lighting conditions in dynamic scenes and then uses the extracted illumination information to render the objects added to the scene. The system comprises several major components to achieve a more realistic augmented reality outcome. First, incident light (direct illumination) is detected in the physical scene using computer vision techniques based on the topological structural analysis of 2D images, with a live-feed 360° camera mounted on the AR device capturing the entire radiance map. Physics-based light polarization also eliminates or reduces false-positive lights, such as white surfaces, reflections, or glare, which negatively affect light detection. Second, reflected light (indirect illumination) bouncing between real-world surfaces is simulated so that it can be rendered onto the virtual objects, reflecting their existence in the virtual world. Third, the shading characteristics and properties of each virtual object are defined to depict correct lighting with suitable shadow casting. Fourth, geometric properties of the real scene, including plane detection, 3D surface reconstruction, and simple meshing, are incorporated into the virtual scene for more realistic depth interactions between real and virtual objects. These components are designed to work simultaneously in real time for photorealistic AR. The system is tested under several lighting conditions to evaluate the accuracy of the results, based on the error between the shadows cast by real and virtual objects and their interactions. For system efficiency, rendering time is compared with previous work. Human perception is further evaluated through a user study, and the overall performance of the system is investigated to minimize its cost.
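The direct-illumination step lends itself to a short sketch. OpenCV's findContours implements the Suzuki-Abe topological structural analysis of binary images that the abstract refers to; the code below, with illustrative thresholds and mapping rather than the system's actual implementation, finds bright blobs in an equirectangular 360° frame and converts their centroids into light directions.

```python
# Sketch of incident-light detection: threshold a 360-degree radiance frame,
# run contour analysis to find bright regions, and map each region's
# centroid to a spherical light direction. Thresholds are assumptions.
import cv2
import numpy as np

def detect_light_sources(equirect_frame, thresh=240, min_area=50):
    gray = cv2.cvtColor(equirect_frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    lights = []
    h, w = gray.shape
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:                     # drop specks and glare remnants
            continue
        m = cv2.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        # Pixel position -> spherical direction on the equirectangular map.
        azimuth = (cx / w) * 2 * np.pi - np.pi
        elevation = np.pi / 2 - (cy / h) * np.pi
        lights.append((azimuth, elevation, area))
    return lights

frame = np.zeros((256, 512, 3), np.uint8)
cv2.circle(frame, (400, 60), 12, (255, 255, 255), -1)   # synthetic "sun"
print(detect_light_sources(frame))
```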
87

Game-Theoretic Modeling of Multi-Agent Systems: Applications in Systems Engineering and Acquisition Processes

Salar Safarkhani (9165011) 24 July 2020 (has links)
The process of acquiring large-scale complex systems is usually characterized by cost and schedule overruns. To investigate the causes of this problem, we may view the acquisition of a complex system on several different time scales. At finer time scales, one may study different stages of the acquisition process, from the intricate details of the entire systems engineering process, to communication between design teams, to how individual designers solve problems. At the largest time scale, one may consider the acquisition process as a series of actions: request for bids, bidding and auctioning, contracting, and finally building and deploying the system, without resolving the fine details within each step. In this work, we study acquisition processes on multiple scales. First, we develop a game-theoretic model for engineering the system in the building-and-deploying stage. We model the interactions among the systems and subsystem engineers as a principal-agent problem. We develop a one-shot shallow systems engineering process and obtain the optimal transfer functions that best incentivize the subsystem engineers to maximize the expected system-level utility. The core of the principal-agent model is the quality function, which maps the effort of the agent to the performance (quality) of the system; we build the stochastic quality function by modeling the design process as a sequential decision-making problem. Second, we develop and evaluate a model of the acquisition process that accounts for the strategic behavior of the different parties. We cast our model in terms of government-funded projects and assume the following steps. First, the government publishes a request for bids. Then, private firms offer their proposals in a bidding process, and the winning bidder enters into a contract with the government. The contract describes the system requirements and the corresponding monetary transfers for meeting them. The winning firm devotes effort to deliver a system that fulfills the requirements. This can be viewed as a game that the government plays with the bidding firms. We study how different parameters of the acquisition procedure affect the bidders' behavior and, in turn, the utility of the government. Using reinforcement learning, we seek to learn the optimal policies of the actors involved in this game. In particular, we study how the requirements, contract types (such as cost-plus and incentive-based contracts), number of bidders, and problem complexity affect the acquisition procedure. Furthermore, we study the bidding strategies of the private firms and how the contract types affect their strategic behavior.
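To make the bidding stage concrete, here is a toy simulation, under loudly ad hoc assumptions about costs and margins rather than the thesis's learned policies, of how competition among bidders drives down the price the government pays in a sealed-bid, lowest-price award.

```python
# Toy sealed-bid auction: firms with private costs bid cost plus a margin
# that shrinks with competition; the government awards the contract to the
# lowest bid. The margin rule is an illustrative assumption only.
import random

def simulate_auction(n_bidders, n_rounds=10_000, markup=0.3):
    total_price = 0.0
    for _ in range(n_rounds):
        costs = [random.uniform(0.5, 1.5) for _ in range(n_bidders)]
        bids = [c * (1 + markup / n_bidders) for c in costs]
        total_price += min(bids)              # lowest bid wins the contract
    return total_price / n_rounds

for n in (2, 4, 8):
    print(f"{n} bidders -> average contract price {simulate_auction(n):.3f}")
```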
88

On Higher Order Graph Representation Learning

Balasubramaniam Srinivasan (12463038) 26 April 2022 (has links)
Research on graph representation learning (GRL) has made major strides over the past decade, with widespread applications in domains such as e-commerce, personalization, fraud and abuse detection, the life sciences, and social network analysis. Despite this success, fundamental questions about the practices employed in modern-day GRL have remained unanswered. Unraveling and advancing two such fundamental questions forms the overarching theme of my thesis.

The first part of my thesis deals with the mathematical foundations of GRL. GRL is used to solve tasks such as node classification, link prediction, clustering, and graph classification, albeit with seemingly different frameworks (e.g., graph neural networks for node/graph classification, (implicit) matrix factorization for link prediction/clustering, etc.). The existence of very distinct frameworks for different graph tasks has puzzled researchers and practitioners alike. In my thesis, using group theory, I provide a theoretical blueprint that connects these seemingly different frameworks, bridging methods like matrix factorization and graph neural networks. With this renewed understanding, I then provide guidelines to better realize the full capabilities of these methods on a multitude of tasks.

The second part of my thesis deals with cases where modeling real-world objects as a graph is an oversimplified description of the underlying data. Specifically, I look at two such objects: (i) hypergraphs, where edges encompass two or more vertices, and (ii) proteins, using GRL to predict protein properties. For hypergraphs, I develop a hypergraph neural network that takes advantage of the inherent sparsity of real-world hypergraphs without unduly sacrificing its ability to distinguish non-isomorphic hypergraphs. The network is then leveraged to learn expressive representations of hyperedges for two tasks, hyperedge classification and hyperedge expansion. Experiments show that it improves on the current approach of converting the hypergraph into a dyadic graph and applying (dyadic) GRL frameworks. For proteins, I introduce the concept of conditional invariances and leverage it to model the inherent flexibility of proteins. Using conditional invariances, I provide a new GRL framework that can capture protein-dependent conformations and ensures that all viable conformers of a protein obtain the same representation. Experiments show that endowing existing GRL models with my framework yields noticeable improvements on multiple protein datasets and tasks.
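A minimal sketch of sparsity-aware hypergraph message passing, assuming mean aggregation and illustrative weights rather than the architecture developed in the thesis: vertex features are pooled into each hyperedge and scattered back, touching only the incidence lists instead of a dense dyadic expansion.

```python
# Sketch of one hypergraph message-passing layer: vertex -> hyperedge
# aggregation, then hyperedge -> vertex scattering over incidence lists.
import numpy as np

def hypergraph_layer(X, hyperedges, W_v2e, W_e2v):
    """X: (n_vertices, d); hyperedges: list of vertex-index lists."""
    edge_feats = np.stack([X[vs].mean(axis=0) for vs in hyperedges]) @ W_v2e
    edge_feats = np.maximum(edge_feats, 0.0)          # ReLU
    out = np.zeros((X.shape[0], W_e2v.shape[1]))
    counts = np.zeros(X.shape[0])
    for e, vs in enumerate(hyperedges):
        out[vs] += edge_feats[e] @ W_e2v              # scatter edge message
        counts[vs] += 1
    return out / np.maximum(counts, 1)[:, None]       # mean over incident edges

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
edges = [[0, 1, 2], [2, 3], [3, 4, 5]]                # hyperedges of 2+ vertices
H = hypergraph_layer(X, edges, rng.normal(size=(4, 8)), rng.normal(size=(8, 8)))
print(H.shape)  # (6, 8)
```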
89

High-performant, Replicated, Queue-oriented Transaction Processing Systems on Modern Computing Infrastructures

Thamir Qadah (11132985) 27 July 2021 (has links)
With the shifting landscape of computing hardware architectures and the emergence of new computing environments (e.g., large main-memory systems, hundreds of CPUs, distributed and virtualized cloud-based resources), state-of-the-art designs of transaction processing systems that rely on conventional wisdom miss performance-optimization opportunities. This dissertation challenges conventional wisdom and rethinks the design and implementation of transaction processing systems for modern computing environments.

We start by tackling the vertical hardware-scaling challenge and propose a deterministic approach to transaction processing on emerging multi-socket, many-core, shared-memory architectures that harnesses their unprecedented parallelism. Our priority-based, queue-oriented transaction processing architecture eliminates the transaction contention footprint and uses speculative execution to improve the throughput of centralized deterministic transaction processing systems. We build QueCC and demonstrate up to two orders of magnitude better performance than the state of the art.

We further tackle the horizontal scaling challenge and propose a distributed queue-oriented transaction processing engine that relies on queue-oriented communication to eliminate the traditional overhead of commitment protocols for multi-partition transactions. We build Q-Store and demonstrate up to 22x improvement in system throughput over state-of-the-art deterministic transaction processing systems.

Finally, we propose a generalized framework for designing distributed, replicated, deterministic transaction processing systems. We introduce the concept of speculative replication to hide the latency overhead of replication. We prototype the speculative replication protocol in QR-Store and perform an extensive experimental evaluation using standard benchmarks, showing that QR-Store can achieve a throughput of 1.9 million replicated transactions per second in under 200 milliseconds, with a replication overhead of 8-25% compared to non-replicated configurations.
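The queue-oriented idea can be sketched in a few lines. The toy planner below, a conceptual illustration rather than QueCC's implementation, stamps a batch of transactions with priorities and splits their operations into per-record queues, so that draining each queue in order yields the same serial schedule on every replica without locks.

```python
# Conceptual sketch of queue-oriented planning: a deterministic planner
# pre-orders a batch of transactions by priority and splits their operations
# into per-record queues; executors drain each queue without runtime
# conflict checks, producing the same outcome on every replica.
from collections import defaultdict

def plan_batch(transactions):
    """transactions: list of (txn_id, [record_keys]) in priority order."""
    queues = defaultdict(list)
    for priority, (txn_id, keys) in enumerate(transactions):
        for key in keys:
            queues[key].append((priority, txn_id))   # priority-stamped op
    return queues

def execute(queues, store):
    # Each record's queue is already in priority order, so draining it
    # yields a deterministic serial order -- no locking needed.
    for key, ops in queues.items():
        for priority, txn_id in ops:
            store[key] = txn_id                      # stand-in for the real op

batch = [("t1", ["a", "b"]), ("t2", ["b", "c"]), ("t3", ["a"])]
store = {}
execute(plan_batch(batch), store)
print(store)   # {'a': 't3', 'b': 't2', 'c': 't2'} -- same on every replica
```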
90

Auditable Computations on (Un)Encrypted Graph-Structured Data

Servio Ernesto Palacios Interiano (8635641) 29 July 2020 (has links)
Graph-structured data is pervasive. Modeling large-scale network-structured datasets requires graph processing and management systems such as graph databases. Further, the analysis of graph-structured data often necessitates bulk downloads/uploads from/to the cloud or edge nodes. Unfortunately, experience has shown that malicious actors can compromise the confidentiality of highly sensitive data stored in the cloud or on shared nodes, even in encrypted form. For particular use cases (multi-modal knowledge graphs, electronic health records, finance), network-structured datasets can be highly sensitive and require auditability, authentication, integrity protection, and privacy-preserving computation in a controlled and trusted environment; traditional cloud computation is not suitable for these use cases. Similarly, many modern applications use a "shared, replicated database" approach to provide accountability and traceability. Those applications often suffer from significant privacy issues because every node in the network can access a copy of the relevant contract code and data to guarantee the integrity of transactions and reach consensus, even in the presence of malicious actors.

This dissertation proposes breaking from the traditional cloud computation model and instead shipping certified, pre-approved, trusted code closer to the data to protect the confidentiality of graph-structured data. Our technique runs in a controlled environment on a trusted data-owner node and provides proof of correct code execution. The computation can be audited later and provides the building block for automating a variety of real use cases that require preserving data ownership. This project uses trusted execution environments (TEEs) but does not rely solely on the TEE architecture to protect the privacy of data and code. We examine the drawbacks of using trusted execution environments in cloud environments, and we analyze the privacy challenges raised by using blockchain technologies to provide accountability and traceability.

First, we propose AGAPECert, an Auditable, Generalized, Automated, Privacy-Enabling Certification framework capable of performing auditable computation on private graph-structured data and reporting real-time aggregate certification status without disclosing the underlying private graph-structured data. AGAPECert uses a novel mix of trusted execution environments, blockchain technologies, and a real-time graph-based API standard to provide automated, oblivious, and auditable certification. This dissertation introduces two core concepts that provide accountability, data provenance, and automation for the certification process: Oblivious Smart Contracts and Private Automated Certifications. Second, we contribute an auditable, integrity-preserving graph processing model called AuditGraph.io, which uses a unique block-based layout and a multi-modal knowledge graph, potentially improving the access locality, encryption, and integrity of highly sensitive graph-structured data. Third, we contribute a data store and compute engine, TruenoDB, that facilitates the analysis and presentation of graph-structured data and offers better throughput than the state of the art. Finally, the dissertation proposes integrity-preserving streaming frameworks at the edge of the network with personalized graph-based object lookup.
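As a sketch of the auditability building block, assuming plain SHA-256 hash chaining in place of the TEE attestation and blockchain anchoring the dissertation combines, the code below appends tamper-evident records of aggregate certification status and verifies the chain.

```python
# Illustrative hash-chained audit log: each step appends a record linked to
# the previous digest, so an auditor can verify the sequence later without
# seeing the underlying private graph data (only aggregate status is logged).
import hashlib
import json

def append_record(log, payload):
    prev = log[-1]["digest"] if log else "0" * 64
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    log.append({"prev": prev, "payload": payload,
                "digest": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    prev = "0" * 64
    for rec in log:
        body = json.dumps({"prev": rec["prev"], "payload": rec["payload"]},
                          sort_keys=True)
        if rec["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != rec["digest"]:
            return False
        prev = rec["digest"]
    return True

log = []
append_record(log, {"step": "certify", "status": "pass"})  # aggregate only
append_record(log, {"step": "recheck", "status": "pass"})
print(verify(log))  # True
```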
