111

Methods for Efficient Synthesis of Large Reversible Binary and Ternary Quantum Circuits and Applications of Linear Nearest Neighbor Model

Hawash, Maher Mofeid 30 May 2013 (has links)
This dissertation describes the development of automated synthesis algorithms that construct reversible quantum circuits for reversible functions with a large number of variables. Specifically, the research focuses on reversible, permutative, and fully specified binary and ternary specifications and on the applicability of the resulting circuit to the physical limitations of existing quantum technologies. Automated synthesis of arbitrary reversible specifications is an NP-hard, multiobjective optimization problem, where 1) the amount of time and computational resources required to synthesize the specification, 2) the number of primitive quantum gates in the resulting circuit (quantum cost), and 3) the number of ancillary qubits (variables added to hold intermediate calculations) are all minimized while 4) the number of variables is maximized. Some of the existing algorithms in the literature ignored objective 2 by focusing on the synthesis of a single solution without the addition of any ancillary qubits, while others attempted to explore every possible solution in the search space in an effort to discover the optimal solution (i.e., they sacrificed objectives 1 and 4). Other algorithms resorted to adding a huge number of ancillary qubits (counter to objective 3) in an effort to minimize the number of primitive gates (objective 2). In this dissertation, I first introduce the MMDSN algorithm, which is capable of synthesizing binary specifications of up to 30 variables, does not add any ancillary variables, produces better quantum cost (8-50% improvement) than algorithms which limit their search to a single solution, and does so within a minimal amount of time compared to algorithms which perform exhaustive search (seconds vs. hours). The MMDSN algorithm introduces an innovative method of using the Hasse diagram to construct candidate solutions that are guaranteed to be valid, and then selects the solution with the minimal quantum cost out of this subset. I then introduce the Covered Set Partitions (CSP) algorithm, which expands the search space of valid candidate solutions and allows for exploring solutions outside the range of MMDSN. I show a method of subdividing the expansive search landscape into smaller partitions and demonstrate the benefit of focusing on partition sizes that are around half of the number of variables (15% to 25% improvement over MMDSN for functions with fewer than 12 variables, and more than 1000% improvement for functions with 12 and 13 variables). For a function of n variables, the CSP algorithm theoretically requires n times more time to synthesize; however, by focusing on the middle partition sizes (around half the number of variables), it remains competitive with MMDSN while typically yielding lower quantum cost. I also show that using a Tabu search for selecting the next set of candidates from the CSP subset results in discovering solutions with even lower quantum costs (up to 10% improvement over CSP with random selection). In Chapters 9 and 10 I question the predominant methods of measuring quantum cost and their applicability to physical implementations of quantum gates and circuits. I counter the prevailing literature by introducing a new standard for measuring the performance of quantum synthesis algorithms: enforcing the Linear Nearest Neighbor Model (LNNM) constraint, which is imposed by today's leading implementations of quantum technology.
In addition to enforcing physical constraints, the new LNNM quantum cost (LNNQC) allows for a level comparison amongst all methods of synthesis, specifically between methods which add a large number of ancillary variables and ones that add no additional variables. I show that, when LNNM is enforced, the quantum cost for methods that add a large number of ancillary qubits increases significantly (up to 1200%). I also extend the Hasse-based method to the ternary domain and demonstrate synthesis of specifications of up to 9 ternary variables (compared to the 3 ternary variables previously reported in the literature). I introduce the concept of ternary precedence order and its implications for the construction of the Hasse diagram and of valid candidate solutions. I also provide a case study comparing the performance of ternary logic synthesis of large functions using both a CUDA graphics processor with 1024 cores and an Intel i7 processor with 8 cores. In the process of exploring large ternary functions I introduce, to the literature, eight families of ternary benchmark functions along with a multiple-valued file specification (the Extended Quantum Specification, XQS). I also introduce a new composite quantum gate, the multiple-valued Swivel gate, which swaps the information of qubits around a centrally located pivot point. In summary, my research objectives are as follows: * Explore and create automated synthesis algorithms for reversible circuits, in both binary and ternary logic, for a large number of variables. * Study the impact of enforcing the Linear Nearest Neighbor Model (LNNM) constraint for every interaction between qubits for reversible binary specifications. * Advocate for a revised metric for measuring the cost of a quantum circuit in concordance with LNNM, where, on one hand, such a metric provides a way for a balanced comparison between the various flavors of algorithms and, on the other hand, represents a realistic cost of a quantum circuit with respect to an ion-trap implementation. * Establish an open source repository for sharing the results, software code, and publications with the scientific community. With the dwindling expectations for a new lifeline for silicon-based technologies, quantum computation has the potential of becoming the future workhorse of computation. Similar to the automated CAD tools of classical logic, my work lays the foundation for creating automated tools for constructing quantum circuits from reversible specifications.
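
To illustrate why the LNNM constraint changes the cost picture, the following minimal sketch counts the SWAP overhead that two-qubit gates incur on a linear qubit arrangement. The 2*(d-1) SWAP convention, the function name, and the example gate list are illustrative assumptions, not the dissertation's LNNQC definition.

    # Illustrative sketch of SWAP overhead under a Linear Nearest Neighbor layout.
    # Assumption (not the dissertation's LNNQC definition): a two-qubit gate acting
    # on line positions (a, b) with distance d = |a - b| costs 2 * (d - 1) extra
    # SWAP gates to shuttle one qubit next to the other and back.

    def lnn_swap_overhead(gates):
        """gates: list of (control, target) line positions of two-qubit gates."""
        overhead = 0
        for a, b in gates:
            d = abs(a - b)
            if d > 1:
                overhead += 2 * (d - 1)
        return overhead

    # Hypothetical 4-qubit circuit with CNOTs on positions (0, 3), (1, 2), (0, 2)
    print(lnn_swap_overhead([(0, 3), (1, 2), (0, 2)]))  # 4 + 0 + 2 = 6 extra SWAPs

Under such a count, circuits that rely on many ancillary qubits tend to accumulate long-range interactions, which is why enforcing LNNM raises their measured cost disproportionately.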
112

Common-Near-Neighbor Information in Discriminative Spaces for Human Re-identification / 人物照合のための識別空間中での共通近傍情報

Li, Wei 23 May 2014 (has links)
Kyoto University / 0048 / New system, course doctorate / Doctor of Informatics / Kou No. 18482 / Joho-haku No. 533 / 新制||情||94 (University Library) / 31360 / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor 美濃 導彦, Professor 河原 達也, Professor 中村 裕一 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
113

Practical Web-scale Recommender Systems / 実用的なWebスケール推薦システム

Tagami, Yukihiro 25 September 2018 (has links)
Kyoto University / 0048 / New system, course doctorate / Doctor of Informatics / Kou No. 21390 / Joho-haku No. 676 / 新制||情||117 (University Library) / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor 鹿島 久嗣, Professor 山本 章博, Professor 下平 英寿 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
114

INTERPOLATING HYDROLOGIC DATA USING LAPLACE FORMULATION

Tianle Xu (10802667) 14 May 2021 (has links)
Spatial interpolation techniques play an important role in hydrology, as many point observations need to be interpolated to create continuous surfaces. Despite the availability of several tools and methods for interpolating data, not all of them work consistently for hydrologic applications. One of these techniques, the Laplace equation, which is used in hydrology for creating flownets, has rarely been used for interpolating hydrologic data. The objective of this study is to examine the efficiency of the Laplace formulation (LF) in interpolating hydrologic data and to compare it with other widely used methods such as inverse distance weighting (IDW), natural neighbor, and kriging. The comparison is performed quantitatively using root mean square error (RMSE), visually by judging whether reasonable surfaces are created, and computationally in terms of ease of operation and speed. Data related to surface elevation, river bathymetry, precipitation, temperature, and soil moisture are used for different areas in the United States. RMSE results show that LF performs better than IDW and is comparable to the other methods in accuracy. LF is easy to use as it requires fewer input parameters than IDW and kriging. Computationally, LF is comparable to the other methods in terms of speed when the datasets are not large. Overall, LF offers a robust alternative to existing methods for interpolating various hydrologic data. Further work is required to improve its computational efficiency for large datasets and to determine the effects of different cell sizes.
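
As a minimal sketch of the Laplace-formulation idea (assuming a regular grid, a Jacobi relaxation scheme, and synthetic data; not the study's implementation), unknown cells can be relaxed iteratively toward the average of their neighbors while observed cells stay fixed:

    import numpy as np

    # Minimal sketch of Laplace-formulation interpolation on a regular grid:
    # known observations are held fixed; unknown cells are iteratively relaxed to
    # the average of their four neighbors (a Jacobi solution of Laplace's equation).
    # Grid size, tolerance, and the zero initial guess are illustrative assumptions.

    def laplace_interpolate(values, known_mask, tol=1e-4, max_iter=10000):
        grid = np.where(known_mask, values, 0.0).astype(float)
        for _ in range(max_iter):
            padded = np.pad(grid, 1, mode="edge")        # edge padding handles borders
            neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                         padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
            new_grid = np.where(known_mask, values, neighbors)
            if np.max(np.abs(new_grid - grid)) < tol:
                return new_grid
            grid = new_grid
        return grid

    # Hypothetical example: a 50x50 field with 100 scattered observations
    rng = np.random.default_rng(0)
    field = np.zeros((50, 50))
    mask = np.zeros((50, 50), dtype=bool)
    rows, cols = rng.integers(0, 50, 100), rng.integers(0, 50, 100)
    mask[rows, cols] = True
    field[rows, cols] = rng.normal(10.0, 2.0, 100)
    surface = laplace_interpolate(field, mask)

Because the only inputs are the grid and the observation mask, this reflects the abstract's point that LF needs fewer tuning parameters than IDW or kriging.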
115

Classification of Dense Masses in Mammograms

Naram, Hari Prasad 01 May 2018 (has links) (PDF)
This dissertation details techniques developed to aid in the classification of tumors, non-tumors, and dense masses in a mammogram; characteristics such as texture in a mammographic image are used to identify the regions of interest as part of the classification. Pattern recognition techniques such as the nearest mean classifier and the support vector machine (SVM) classifier are used to classify the extracted features. The initial stages process the mammographic image to extract the relevant features necessary for classification, and in the final stage the features are classified using the pattern recognition techniques mentioned above. The goal of this research is to provide medical experts and researchers with an effective method to aid them in identifying tumors, non-tumors, and dense masses in a mammogram. First, the breast region is extracted from the entire mammogram. The extraction is carried out by creating masks and using those masks to extract the region of interest pertaining to the tumor. A chain code is employed to extract the various regions; the extracted regions could potentially be classified as tumors, non-tumors, or dense regions. Adaptive histogram equalization is employed to enhance the contrast of an image; applying it several times produces a saturated image containing only the bright spots of the mammogram, which appear like dense regions. These dense masses could be potential tumors requiring treatment. Relevant characteristics such as texture in the mammographic image are used for feature extraction, and the resulting features are classified with the nearest mean and support vector machine classifiers. A total of thirteen Haralick features are used to classify the three classes. The support vector machine classifier is used for the two-class problems, with a radial basis function (RBF) kernel and a search for the best possible (C, gamma) values. Results obtained in this research suggest that the best classification accuracy was achieved using support vector machines for both tumor vs. non-tumor and tumor vs. dense masses. The maximum accuracies achieved are above 90% for tumor vs. non-tumor and 70.8% for dense masses, using 11 features with support vector machines. Support vector machines performed better than the nearest mean classifier in classifying the classes. Case studies were performed using two distinct datasets, each consisting of 24 patients' data in two individual views; each patient's data consists of both the cranio-caudal and medio-lateral oblique views, from which the regions of interest that could possibly be a tumor, non-tumor, or dense region (mass) are extracted.
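
A hedged sketch of the SVM stage described above, assuming a precomputed matrix of Haralick texture features; the placeholder data, parameter grid, and cross-validation setup are illustrative, not the values used in the dissertation:

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Sketch of the two-class SVM stage: X holds precomputed Haralick texture
    # features (e.g., 13 per region of interest), y holds labels such as
    # tumor vs. non-tumor. Data, parameter grid, and CV folds are placeholders.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(48, 13))          # placeholder feature vectors
    y = rng.integers(0, 2, size=48)        # placeholder labels

    param_grid = {"svc__C": [0.1, 1, 10, 100],
                  "svc__gamma": [1e-3, 1e-2, 1e-1, 1]}
    search = GridSearchCV(make_pipeline(StandardScaler(), SVC(kernel="rbf")),
                          param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)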
116

Maskininlärning som medel för att betygsätta samtal med språklärande syfte mellan robot och människa / Machine learning as tool to grade language learning conversations between robot and human

Melander, Gustav, Wänlund, Robin January 2019 (has links)
Det svenska företaget Furhat Robotics har skapat en robot kallad Furhat vilken är kapabel till att interagera med människor i språkcafé-liknande miljöer. Syftet med den robotledda konversationen är att utveckla deltagarnas språkkunskaper, vilka efter varje konversation får svara på en enkät om vad de tyckte om samtalet med Furhat. Ur detta har frågan huruvida det är möjligt att förutspå vad deltagarna tyckte om samtalet baserat på konversationens struktur uppstått. Syftet med denna rapport är att analysera huruvida det är möjligt att kvantifiera konversationerna och förutspå svaren i enkäten med hjälp av maskininlärning. Det dataset som rapporten baserar sig på erhölls från tidigare studier i Kollaborativ Robotassisterad Språkinlärning (Collaborative Robot Assisted Language Learning). Resultaten visade på ett RMSE högre än variansen för medelvärdet av enkätsvaren vilket indikerar att den framtagna modellen inte är särskilt effektiv. Modellen presterade dock bättre i vissa förutsägelser då varje enskilt enkätsvar förutspåddes var för sig. Detta antyder att modellen skulle kunna användas till vissa frågeformuleringar. / The Swedish company Furhat Robotics has created a robot called Furhat, which is able to interact with humans in a language café setting. The purpose of the robot-led conversation is for the participants to develop their language skills. After the conversation, the participants answer a survey about what they thought of the conversation with Furhat. A question that has arisen from this is whether it is possible to predict the survey answers based on the conversation alone. The purpose of this paper is to analyze whether it is possible to quantify the conversations linked to the survey answers and, by doing so, predict the answers for new conversations with a machine learning approach. The data set used was obtained from an earlier study in Collaborative Robot Assisted Language Learning. The result was an RMSE greater than the variance of the average conversation score, which indicates that the model is not very effective. However, it performed better on some predictions when each survey answer was predicted separately, indicating that the model could be used for certain question formulations.
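
A minimal sketch of the evaluation idea (comparing a model's RMSE against a baseline that always predicts the mean survey score); the features, ridge regressor, and synthetic data are placeholders standing in for the conversation statistics:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    # Sketch of the evaluation: compare a regressor's RMSE on survey scores
    # against always predicting the training mean. Features, model choice,
    # and data are illustrative placeholders.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 6))                        # e.g., conversation statistics
    y = 3.0 + 0.5 * X[:, 0] + rng.normal(0, 1, 200)      # e.g., 1-5 survey score

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = Ridge().fit(X_tr, y_tr)

    rmse_model = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    rmse_baseline = mean_squared_error(y_te, np.full_like(y_te, y_tr.mean())) ** 0.5
    print(rmse_model, rmse_baseline)  # the model only helps if its RMSE is clearly lower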
117

A GENE ONTOLOGY BASED COMPUTATIONAL APPROACH FOR THE PREDICTION OF PROTEIN FUNCTIONS

Kharsikar, Saket 13 September 2007 (has links)
No description available.
118

A Direct Algorithm for the K-Nearest-Neighbor Classifier via Local Warping of the Distance Metric

Neo, TohKoon 30 November 2007 (has links) (PDF)
The k-nearest neighbor (k-NN) pattern classifier is a simple yet effective learner. However, it has a few drawbacks, one of which is the large model size. There are a number of algorithms that are able to condense the model size of the k-NN classifier at the expense of accuracy. Boosting is therefore desirable for increasing the accuracy of these condensed models. Unfortunately, there does not exist a boosting algorithm that works well with k-NN directly. We present a direct boosting algorithm for the k-NN classifier that creates an ensemble of models with locally modified distance weighting. An empirical study conducted on 10 standard databases from the UCI repository shows that this new Boosted k-NN algorithm has increased generalization accuracy in the majority of the datasets and never performs worse than standard k-NN.
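
The following is only a rough sketch of the general idea of locally warping the distance metric across an ensemble of k-NN models, not the direct boosting algorithm proposed in the thesis; the per-instance scale factors, shrink rate, and voting scheme are illustrative assumptions:

    import numpy as np

    # Rough sketch: an ensemble of k-NN "models" that share the training data but
    # locally warp the distance metric via per-instance scale factors; instances
    # misclassified in one round are pulled "closer" in the next round.
    # X_train: (n, d) float features; y_train: (n,) non-negative integer labels.

    def knn_predict(X_train, y_train, scales, x, k=3):
        d = np.linalg.norm(X_train - x, axis=1) * scales  # warped distances
        nearest = np.argsort(d)[:k]
        return np.bincount(y_train[nearest]).argmax()     # majority vote of neighbors

    def boosted_knn(X_train, y_train, X_test, rounds=5, k=3, shrink=0.8):
        scales = np.ones(len(X_train))
        ensemble = []
        for _ in range(rounds):
            ensemble.append(scales.copy())
            preds = np.array([knn_predict(X_train, y_train, scales, x, k) for x in X_train])
            scales = np.where(preds != y_train, scales * shrink, scales)
        votes = np.array([[knn_predict(X_train, y_train, s, x, k) for x in X_test]
                          for s in ensemble])
        return np.array([np.bincount(col).argmax() for col in votes.T])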
119

Advancing the Effectiveness of Non-Linear Dimensionality Reduction Techniques

Gashler, Michael S. 18 May 2012 (has links) (PDF)
Data that is represented with high dimensionality presents a computational complexity challenge for many existing algorithms. Limiting dimensionality by discarding attributes is sometimes a poor solution to this problem because significant high-level concepts may be encoded in the data across many or all of the attributes. Non-linear dimensionality reduction (NLDR) techniques have been successful with many problems at minimizing dimensionality while preserving intrinsic high-level concepts that are encoded with varying combinations of attributes. Unfortunately, many challenges remain with existing NLDR techniques, including excessive computational requirements, an inability to benefit from prior knowledge, and an inability to handle certain difficult conditions that occur in the data of many real-world problems. Further, certain practical factors have limited advancement in NLDR, such as a lack of clarity regarding suitable applications for NLDR, and a general unavailability of efficient implementations of complex algorithms. This dissertation presents a collection of papers that advance the state of NLDR in each of these areas. Contributions of this dissertation include: • An NLDR algorithm, called Manifold Sculpting, that optimizes its solution using graduated optimization. This approach enables it to obtain better results than methods that only optimize an approximate problem. Additionally, Manifold Sculpting can benefit from prior knowledge about the problem. • An intelligent neighbor-finding technique called SAFFRON that improves the breadth of problems that existing NLDR techniques can handle. • A neighborhood refinement technique called CycleCut that further increases the robustness of existing NLDR techniques, and that can work in conjunction with SAFFRON to solve difficult problems. • Demonstrations of specific applications for NLDR techniques, including the estimation of state within dynamical systems, training of recurrent neural networks, and imputing missing values in data. • An open source toolkit containing each of the techniques described in this dissertation, as well as several existing NLDR algorithms, and other useful machine learning methods.
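
As a small illustration of the NLDR task itself (using an existing algorithm, Isomap, rather than the dissertation's Manifold Sculpting, SAFFRON, or CycleCut), a 3-D swiss roll whose intrinsic structure is two-dimensional can be embedded in two dimensions:

    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import Isomap

    # Illustration of the general NLDR task with an existing algorithm (Isomap):
    # embed a 3-D swiss roll, whose intrinsic structure is 2-D, into 2 dimensions.
    # The neighbor count and sample size are illustrative choices.
    X, color = make_swiss_roll(n_samples=1000, random_state=0)
    embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
    print(embedding.shape)  # (1000, 2)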
120

Generating Exploration Mission-3 Trajectories to a 9:2 NRHO Using Machine Learning

Guzman, Esteban 01 December 2018 (has links) (PDF)
The purpose of this thesis is to design a machine learning algorithm platform that provides expanded knowledge of mission availability through a launch season by improving trajectory resolution and introducing launch mission forecasting. The specific scenario addressed in this paper is one in which data is provided for four deterministic translational maneuvers through a mission to a Near Rectilinear Halo Orbit (NRHO) with a 9:2 synodic frequency. Current launch availability knowledge under NASA's Orion Orbit Performance Team is established by altering optimization variables associated with given reference launch epochs. This method can be an abstract task and relies on an orbit analyst to structure a mission based on an established mission design methodology tied to the performance of Orion and NASA's Space Launch System. Introducing a machine learning algorithm trained to construct mission scenarios within the feasible range of known trajectories reduces the required interaction of the orbit analyst by removing the step of optimizing the orbit to fit the expected translational response required of the spacecraft. In this study, k-Nearest Neighbor and Bayesian Linear Regression successfully predicted classical orbital elements for the launch windows observed; however, both algorithms had limitations due to their approaches to model fitting. Training machine learning algorithms on classical orbital elements introduces a repetitive approach to reconstructing mission segments for different arrival opportunities through the launch window and can prove to be a viable method of launch window scan generation for future missions.
