  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
601

A Comparative Analysis of Web-based Machine Translation Quality: English to French and French to English

Barnhart, Zachary 12 1900 (has links)
This study offers a partial replication of a 2006 study by Williams, which focused primarily on analyzing the quality of translation produced by online software, namely Yahoo!® Babelfish, Freetranslation.com, and Google Translate. Since the data for the Williams study were collected in 2004 and the data for the present study in 2012, the lapse of eight years allows a diachronic analysis of the differences in the quality of the translations provided by these online services. At the time of the 2006 study, all three services used a rule-based translation system, but in October 2007 Google Translate switched to a system that is entirely statistical in nature. Thus, the present study is also able to examine the differences in quality between contemporary statistical and rule-based approaches to machine translation.
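Studies like this typically score translations against human references; as a rough illustration of how such output might be scored automatically, the sketch below computes a simple n-gram precision (a much-simplified cousin of BLEU). The service names and sentences are invented, not data from the study.

```python
from collections import Counter

def ngram_precision(candidate, reference, n=2):
    """Fraction of candidate n-grams that also appear in the reference."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate.split(), n), ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())   # clipped n-gram matches
    total = sum(cand.values())
    return overlap / total if total else 0.0

# Hypothetical outputs from two services, scored against a human reference.
reference = "the cat sat on the mat"
outputs = {
    "service_a": "the cat sat on the mat",
    "service_b": "the cat is sitting on mat",
}
scores = {name: ngram_precision(out, reference) for name, out in outputs.items()}
```

A real comparison would average clipped precisions over several n-gram orders and add a brevity penalty, but even this toy metric ranks the faithful output above the degraded one.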
602

Optimal choice of machine tool for a machining job in a CAE environment

Kumar, Eshwar January 2010 (has links)
Developments in cutting tools, coolants, drives, controls, tool changers, pallet changers and the philosophy of machine tool design have made groundbreaking changes in machine tools and machining processes. Modern machining centres have been developed to perform several operations on several faces of a workpiece in a single setup. On the other hand, industry requires high-value-added components, which have many quality-critical features, to be manufactured in an outsourcing environment as opposed to the traditional in-house manufacture. The success of this manufacture depends critically on matching the advanced features of the machine tools to the complexity of the component. This project has developed a methodology to represent the features of a machine tool as one alphanumeric string and the features of the component as another. The strings are then matched to choose the most suitable and economical machine tool for the component's manufacture. The literature identified that a block structure is the way to answer the question 'how to systematically describe the layout of such a machining centre'; incomplete attempts to describe a block structure as alphanumeric strings were also presented in the literature. A survey of sales literature from several machine tool suppliers was conducted to systematically identify the features needed by the user for the choice of a machine tool. Combining these, a new alphanumeric string was developed to represent machine tools, and a supporting database of machine tools was developed that uses these strings as one of its sort keys. A survey of machining, on the other hand, identified that machining features can be used as a basis for planning the machining of a component. It analysed the various features and feature sets proposed, and their recognition in CAD models. Though a vast number of features were described, only two sets were complete.
The project was started with one of them, the machining features supported by the 'Expert Machinist' software (the other set carried too many details irrelevant to the task of this project). When that software became unavailable, a feature set along the same lines was defined and used in the generation of an alphanumeric string to represent the work. Comparing the two strings led to the choice of suitable machines from the database. The methodology is implemented as bolt-on software incorporated within Pro/Engineer, where one can model any given component using cut features (mimicking machining operations) and produce a list of machine tools having the features required for the machining of that component. This will enable outsourcing companies to identify those precision engineers who have machine tools with the matching capabilities. Supporting software and databases were developed using Access Database, Visual Basic and C with Pro/TOOLKIT functions. The resulting software suite was tested on several case studies and found to be effective.
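The string-matching idea can be sketched in a few lines. The field layout, codes, and wildcard convention below are illustrative assumptions, not the thesis's actual encoding.

```python
# Each position in the string encodes one machine/component attribute
# (axis count, spindle orientation, pallet changer, tool magazine, ...).
# The codes and field order here are hypothetical.
FIELDS = ["axes", "spindle", "pallet_changer", "tool_magazine"]

def matches(machine_code: str, component_code: str) -> bool:
    """A machine qualifies if every field meets or exceeds the requirement.
    '*' in the component string means 'don't care'."""
    return all(req == "*" or cap >= req
               for cap, req in zip(machine_code, component_code))

# Hypothetical machine-tool database keyed by model name.
machines = {"VMC-3ax": "3V01", "HMC-5ax": "5H14"}
component = "3*0*"          # needs >= 3 axes, any spindle, any magazine
suitable = [name for name, code in machines.items() if matches(code, component)]
```

In practice the database would also carry cost data so that, among the matching machines, the most economical one can be ranked first.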
603

Monitoring of tool wear in turning operations using vibration measurements

Scheffer, Cornelius 21 December 2006 (has links)
This study investigates the use of vibration and strain measurements on machine tools in order to identify the progressive wear of the selected tools. Two case studies are considered, one of which was conducted in the plant of a South African piston manufacturer. The purpose of the first case study was to investigate the feasibility of vibration monitoring for identifying tool wear, before attempting to implement a monitoring system in the manufacturing plant. During this case study, data from a turning process were recorded using two accelerometers coupled to a PL202 real-time FFT analyser. Features indicative of tool wear were extracted from the sensor signals and then used as inputs to a Self-Organising Map (SOM). The SOM is a type of neural network based on unsupervised learning, and can be used to classify the input data into regions corresponding to new and worn tools. It was also shown that the SOM can be used very efficiently with discrete variables. One of the main contributions of the second case study was that a unique type of tool was investigated, namely a synthetic diamond tool specifically used for the manufacturing of aluminium pistons. Data from the manufacturing of pistons were recorded with two piezoelectric strain sensors and a single accelerometer, all coupled to a DSPT Siglab analyser. A large number of features indicative of tool wear were automatically extracted from different parts of the original signals. These included features from time- and frequency-domain data, time series model coefficients, and features extracted from wavelet packet analysis. A correlation coefficient approach was used to automatically select the best features indicative of the progressive wear of the diamond tools. The SOM was once again used to identify the tool state. Some of the advantages of using different map sizes on the SOM were also demonstrated.
A near 100% correct classification of the tool wear data was obtained by training the SOM with two independent data sets, and testing it with a third independent data set. It was also shown that the monitoring strategy proposed in the second case study can be fully automated and can be implemented on-line if the manufacturer wishes to. Some of the contributions of this study are the use of the SOM for tool wear classification, and conclusions regarding the wear modes of the synthetic diamond tools. / Dissertation (M Eng (Mechanical Engineering))--University of Pretoria, 2006. / Mechanical and Aeronautical Engineering / unrestricted
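A minimal SOM of the kind used for this classification can be sketched as follows; the grid size, learning schedule and synthetic "new/worn tool" features below are illustrative, not the study's actual data or configuration.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=100, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organising Map: returns a (rows, cols, dim) weight grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))
    # Node coordinates on the grid, used by the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3  # shrinking neighbourhood
        for x in data[rng.permutation(len(data))]:
            # Best-matching unit = node whose weights are closest to x.
            d = np.linalg.norm(w - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Pull every node toward x, weighted by grid distance to the BMU.
            grid_d = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_d ** 2) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)
    return w

# Two synthetic clusters standing in for 'new tool' / 'worn tool' features.
rng = np.random.default_rng(1)
new_tool = rng.normal(0.2, 0.05, (50, 3))
worn_tool = rng.normal(0.8, 0.05, (50, 3))
som = train_som(np.vstack([new_tool, worn_tool]))
```

After training, new and worn samples map to different regions of the grid, which is the classification behaviour the study exploits.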
604

Remote control of frequency inverter

Joel, Jaldemark January 2020 (has links)
Emotron has a frequency inverter on the market that different industries use in their factories. In case of errors, Emotron needs to send service staff out to the factories to examine the inverter and find the error. They now want a solution that makes it possible to give support without leaving the office, by connecting their devices to the cloud and thereby eliminating the need to send out staff. Emotron gave this task to HMS, and the solution was made possible with HMS's product, the Anybus Wireless Bolt. By connecting the Anybus Wireless Bolt to the inverter, it was possible to communicate with the cloud, Microsoft Azure, where a static web application is hosted. The application is made to look like the terminal on the inverter and has similar structure and functionality. Through the application, users can communicate with the inverter by controlling the connected motor, reading registers and also writing to certain registers. These registers contain different measurement and option parameters. The purpose of this thesis was to create a proof-of-concept solution using the Anybus Wireless Bolt. The thesis has shown how industries can use the Anybus Wireless Bolt and the tag engine to create a link between machines and the ever-growing cloud, and is also the first part of a bigger project.
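The register read/write pattern described above can be illustrated with an in-memory stand-in; the register addresses, names and values below are hypothetical, and the real Anybus/Azure transport is omitted entirely.

```python
# Hypothetical register map mimicking the read/write pattern described above;
# real addresses, names, and the Anybus/Azure transport are product-specific.
class InverterRegisters:
    READ_ONLY = {0x10}                 # e.g. a measured-value register

    def __init__(self):
        self._regs = {0x10: 1450,      # measured motor speed (rpm)
                      0x20: 0,         # run/stop command
                      0x21: 1500}      # speed setpoint (rpm)

    def read(self, addr: int) -> int:
        return self._regs[addr]

    def write(self, addr: int, value: int) -> None:
        # Measurement registers must not be writable from the web application.
        if addr in self.READ_ONLY:
            raise PermissionError(f"register 0x{addr:02X} is read-only")
        self._regs[addr] = value

inv = InverterRegisters()
inv.write(0x21, 1200)                  # remote user lowers the speed setpoint
setpoint = inv.read(0x21)
```

The web application's terminal-like UI would sit on top of exactly this kind of read/write interface, with the cloud relaying each operation to the physical inverter.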
605

Design of a Compact Machine for the Machining of Keys

Žůrek, František January 2017 (has links)
This master's thesis, Design of a machine tool for safety key production, deals with the concept of a machine tool for the production of safety keys. Solution variants are methodically elaborated, mainly with respect to their achieved tact times and machine dimensions. A computation scheme for fast comparison of concepts under different customer key specifications is presented. The chosen concept is then designed in detail. The result fills a gap in the market for specific machine tools for the machining of safety keys.
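A tact-time comparison between concept variants, of the kind the computation scheme supports, might look like this in miniature; the operation times, station counts and concept names are made up for illustration.

```python
# Illustrative tact-time comparison for machine concepts; the operation
# times and concepts are hypothetical, not the thesis's actual figures.
def tact_time(op_times, parallel_stations=1):
    """Cycle time per key: total operation time divided across stations
    working in parallel (a simplification of a real tact analysis)."""
    return sum(op_times) / parallel_stations

concepts = {
    "single_spindle": tact_time([4.0, 6.5, 3.0]),                       # sequential ops (s)
    "twin_spindle":   tact_time([4.0, 6.5, 3.0], parallel_stations=2),  # ops overlapped
}
best = min(concepts, key=concepts.get)
```

A full comparison would trade the tact-time gain of each concept against its machine dimensions and cost, as the thesis does.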
606

Interpretation, Verification and Privacy Techniques for Improving the Trustworthiness of Neural Networks

Dethise, Arnaud 22 March 2023 (has links)
Neural Networks are powerful tools used in Machine Learning to solve complex problems across many domains, including biological classification, self-driving cars, and automated management of distributed systems. However, practitioners' trust in Neural Network models is limited by their inability to answer important questions about their behavior, such as whether they will perform correctly or if they can be entrusted with private data. One major issue with Neural Networks is their "black-box" nature, which makes it challenging to inspect the trained parameters or to understand the learned function. To address this issue, this thesis proposes several new ways to increase the trustworthiness of Neural Network models. The first approach focuses specifically on Piecewise Linear Neural Networks, a popular flavor of Neural Networks used to tackle many practical problems. The thesis explores several different techniques to extract the weights of trained networks efficiently and use them to verify and understand the behavior of the models. The second approach shows how strengthening the training algorithms can provide guarantees that are theoretically proven to hold even for the black-box model. The first part of the thesis identifies errors that can exist in trained Neural Networks, highlighting the importance of domain knowledge and the pitfalls to avoid with trained models. The second part aims to verify the outputs and decisions of the model by adapting the technique of Mixed Integer Linear Programming to efficiently explore the possible states of the Neural Network and verify properties of its outputs. The third part extends the Linear Programming technique to explain the behavior of a Piecewise Linear Neural Network by breaking it down into its linear components, generating model explanations that are both continuous on the input features and without approximations. 
Finally, the thesis addresses privacy concerns by using Trusted Execution and Differential Privacy during the training process. The techniques proposed in this thesis provide strong, theoretically provable guarantees about Neural Networks, despite their black-box nature, and enable practitioners to verify, extend, and protect the privacy of expert domain knowledge. By improving the trustworthiness of models, these techniques make Neural Networks more likely to be deployed in real-world applications.
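The thesis adapts Mixed Integer Linear Programming for exact output verification; as a lighter-weight illustration of the same goal (bounding a network's outputs over an input region), the sketch below uses interval bound propagation on a tiny hand-made ReLU network. Interval bounds are sound but looser than what an MILP encoding would prove; the network weights are invented for the example.

```python
import numpy as np

def interval_forward(weights, biases, lo, hi):
    """Propagate an input box [lo, hi] through a ReLU network, returning
    sound (if loose) bounds on the outputs. The thesis uses MILP for exact
    reasoning; interval arithmetic is a cheaper, looser relative."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        # Positive weights carry lower bounds to lower bounds; negative
        # weights swap the roles of lo and hi.
        new_lo = Wp @ lo + Wn @ hi + b
        new_hi = Wp @ hi + Wn @ lo + b
        if i < len(weights) - 1:            # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0), np.maximum(new_hi, 0)
        lo, hi = new_lo, new_hi
    return lo, hi

# Tiny hand-made network: 2 inputs -> 2 hidden (ReLU) -> 1 output.
weights = [np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([[1.0, 1.0]])]
biases = [np.array([0.0, 0.0]), np.array([0.0])]
lo, hi = interval_forward(weights, biases,
                          np.array([0.0, 0.0]), np.array([1.0, 1.0]))
```

Here the true maximum output over the unit box is 1.5, while the interval bound reports 2.0: sound but conservative, which is exactly the gap an exact MILP formulation closes at higher computational cost.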
607

Stone Soup Translation: The Linked Automata Model

Davis, Paul C. 02 July 2002 (has links)
No description available.
608

Integrated Process Modeling and Data Analytics for Optimizing Polyolefin Manufacturing

Sharma, Niket 19 November 2021 (has links)
Polyolefins are among the most widely used commodity polymers, with applications in films, packaging and the automotive industry. The modeling of polymerization processes producing polyolefins, including high-density polyethylene (HDPE), polypropylene (PP), and linear low-density polyethylene (LLDPE) using Ziegler-Natta catalysts with multiple active sites, is a complex and challenging task. In our study, we integrate process modeling and data analytics for improving and optimizing polyolefin manufacturing processes. Most of the current literature on polyolefin modeling does not consider all of the commercially important production targets when quantifying the relevant polymerization reactions and their kinetic parameters based on measurable plant data. We develop an effective methodology to estimate the kinetic parameters that have the most significant impacts on specific production targets, and to develop the kinetics using all commercially important production targets, validated over industrial polyolefin processes. We showcase the utility of dynamic models for efficient grade transition in polyolefin processes, and also use the dynamic models for inferential control of polymer processes. Thus, we showcase a methodology for making first-principles polyolefin process models which are scientifically consistent but tend to be less accurate due to the many modeling assumptions required for a complex system. Data analytics and machine learning (ML) have been applied in the chemical process industry for accurate predictions in data-based soft sensors and process monitoring/control. They are especially useful for polymer processes, since polymer quality measurements such as melt index and molecular weight are usually much less frequent than continuous process variable measurements.
We showcase the use of predictive machine learning models like neural networks for predicting polymer quality indicators, and demonstrate the utility of causal models like partial least squares to study the causal effect of process parameters on polymer quality variables. ML models produce accurate results but can over-fit the data and produce scientifically inconsistent results beyond the operating data range. Thus, it is increasingly important to develop hybrid models combining data-based ML models and first-principles models. We present a broad perspective on hybrid process modeling and optimization combining scientific knowledge and data analytics in bioprocessing and chemical engineering with a science-guided machine learning (SGML) approach, not just direct combinations of first-principles and ML models. We present a detailed review of the scientific literature relating to the hybrid SGML approach, and propose a systematic classification of hybrid SGML models according to their methodology and objective. We identify themes and methodologies which have not been explored much in chemical engineering applications, such as the use of scientific knowledge to improve the ML model architecture and learning process for more scientifically consistent solutions. We apply these hybrid SGML techniques, such as inverse modeling and science-guided loss, to industrial polyolefin processes, where many of them have not previously been applied. / Doctor of Philosophy / Almost everything we see around us, from furniture and electronics to bottles and cars, is made fully or partially from plastic polymers. The two most popular polymers, which together comprise almost two-thirds of polymer production globally, are polyethylene (PE) and polypropylene (PP), collectively known as polyolefins. Hence, the optimization of polyolefin manufacturing processes with the aid of simulation models is critical and profitable for the chemical industry.
Modeling of a chemical/polymer process is helpful for process scale-up, product quality estimation/monitoring and new process development. To make a good simulation model, we need to validate the predictions with actual industrial data. The polyolefin process has complex reaction kinetics with multiple parameters that need to be estimated to accurately match the industrial process. We have developed a novel strategy for estimating the kinetics for the model, including the reaction chemistry and the polymer quality information, validated against the industrial process. Thus, we have developed a science-based model which includes knowledge of reaction kinetics, thermodynamics, and heat and mass balances for the polyolefin process. The science-based model is scientifically consistent, but may not be very accurate due to many model assumptions. Therefore, for applications requiring very high accuracy in predicting polymer quality targets such as melt index (MI) and density, data-based techniques may be more appropriate. We hear a lot these days about artificial intelligence (AI) and machine learning (ML); the basic principle behind these methods is to make the model learn from data for prediction. The process data that are measured in a chemical/polymer plant can be utilized for data analysis. We can build ML models to predict polymer targets like MI as a function of the input process variables. The ML model predictions are very accurate within the operating range of the dataset on which the model is trained, but outside that range they may give scientifically inconsistent results. Thus, there is a need to combine the data-based models and scientific models. In our research, we showcase novel approaches to integrate the science-based models and the data-based ML methodology, which we term hybrid science-guided machine learning (SGML) methods.
The hybrid SGML methods applied to polyolefin processes yield not only accurate but also scientifically consistent predictions, which can be used for polyolefin process optimization in applications like process development and quality monitoring.
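One common SGML ingredient, a science-guided loss that penalizes physically inconsistent predictions alongside the usual data mismatch, can be sketched as follows. The monotonicity constraint and all numbers are illustrative assumptions, not the dissertation's actual formulation.

```python
import numpy as np

# Sketch of a science-guided loss: data mismatch plus a penalty for
# violating a known physical trend (here, a hypothetical monotone decrease
# of a quality target along a sorted process-variable axis).
def science_guided_loss(y_pred, y_true, lam=1.0):
    data_loss = np.mean((y_pred - y_true) ** 2)
    # Physics penalty: predictions should be non-increasing; penalise
    # any positive step between consecutive (sorted) operating points.
    violations = np.maximum(np.diff(y_pred), 0.0)
    return data_loss + lam * np.sum(violations ** 2)

y_true = np.array([5.0, 4.0, 3.0, 2.0])
good = science_guided_loss(np.array([5.1, 4.0, 2.9, 2.0]), y_true)
bad = science_guided_loss(np.array([5.1, 4.0, 4.5, 2.0]), y_true)  # bump violates the trend
```

During training, the penalty term steers the ML model toward solutions consistent with the scientific knowledge even in regions where data are sparse, which is the core idea behind science-guided loss functions.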
609

Leveraging Infrared Imaging with Machine Learning for Phenotypic Profiling

Liu, Xinwen January 2024 (has links)
Phenotypic profiling systematically maps and analyzes observable traits (phenotypes) exhibited in cells, tissues, organisms or systems in response to various conditions, including chemical, genetic and disease perturbations. This approach seeks to comprehensively understand the functional consequences of perturbations on biological systems, thereby informing diverse research areas such as drug discovery, disease modeling, functional genomics and systems biology. Corresponding techniques should capture high-dimensional features to distinguish phenotypes affected by different conditions. Current methods mainly include fluorescence imaging, mass spectrometry and omics technologies, coupled with computational analysis, to quantify diverse features such as morphology, metabolism and gene expression in response to perturbations. Yet, they face challenges of high costs, complicated operations and strong batch effects. Vibrational imaging offers an alternative for phenotypic profiling, providing a sensitive, cost-effective and easily operated approach to capture the biochemical fingerprint of phenotypes. Among vibrational imaging techniques, infrared (IR) imaging has the further advantages of high throughput, fast imaging speed and full spectrum coverage compared with Raman imaging. However, current biomedical applications of IR imaging concentrate mainly on "digital disease pathology", which uses label-free IR imaging with machine learning for tissue pathology classification and disease diagnosis. This thesis contributes the first comprehensive study of using IR imaging for phenotypic profiling, focusing on three key areas. First, IR-active vibrational probes are systematically designed to enhance metabolic specificity, thereby enriching the measured features and improving sensitivity and specificity for phenotype discrimination.
Second, experimental workflows are established for phenotypic profiling using IR imaging across biological samples at various levels, including cellular, tissue and organ, in response to drug and disease perturbations. Lastly, complete data analysis pipelines are developed, including data preprocessing, statistical analysis and machine learning methods, with additional algorithmic developments for analyzing and mapping phenotypes. Chapter 1 lays the groundwork by delving into the theory of IR spectroscopy and the instrumentation of IR imaging, establishing a foundation for subsequent studies. Chapter 2 discusses the principles of popular machine learning methods applied in IR imaging, including supervised learning, unsupervised learning and deep learning, providing the algorithmic backbone for later chapters. Additionally, it provides an overview of existing biomedical applications using label-free IR imaging combined with machine learning, facilitating a deeper understanding of the current research landscape and the focal points of IR imaging for traditional biomedical studies. Chapters 3-5 focus on applying IR imaging coupled with machine learning to the novel application of phenotypic profiling. Chapter 3 explores the design and development of IR-active vibrational probes for IR imaging. Three types of vibrational probes, including azide, 13C-based and deuterium-based probes, are introduced to study the dynamic metabolic activities of proteins, lipids and carbohydrates in cells, small organisms and mice for the first time. The developed probes greatly improve the metabolic specificity of IR imaging, enhancing its sensitivity towards different phenotypes. Chapter 4 studies the combination of IR imaging, heavy water labeling and unsupervised learning for tissue metabolic profiling, which provides a novel method to map a metabolic tissue atlas in complex mammalian systems.
In particular, cell type-, tissue- and organ-specific metabolic profiles are identified with spatial information in situ. In addition, this method captures metabolic changes during brain development and characterizes the intratumor metabolic heterogeneity of glioblastoma, showing great promise for disease modeling. Chapter 5 develops Vibrational Painting (VIBRANT), a method using IR imaging, multiplexed vibrational probes and supervised learning for cellular phenotypic profiling of drug perturbations. Three IR-active vibrational probes were designed to measure distinct essential metabolic activities in human cancer cells. More than 20,000 single-cell drug responses were collected, corresponding to 23 drug treatments. Supervised learning is used to accurately predict drug mechanism of action at the single-cell level with minimal batch effects. We further designed an algorithm to discover drug candidates with novel mechanisms of action and to evaluate drug combinations. Overall, VIBRANT has demonstrated great potential across multiple areas of phenotypic drug screening.
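As a toy illustration of supervised phenotype classification from spectra, the sketch below builds synthetic "IR spectra" for two hypothetical drug classes and predicts by nearest centroid; VIBRANT's actual probes, data and models are far richer than this stand-in.

```python
import numpy as np

# Toy stand-in for supervised phenotype classification from IR spectra:
# synthetic 'spectra' per hypothetical drug class, nearest-centroid prediction.
rng = np.random.default_rng(0)
n_channels = 64

def make_class(peak, n=40):
    """Synthetic spectra: a Gaussian absorption band at `peak` plus noise."""
    x = np.arange(n_channels)
    band = np.exp(-((x - peak) ** 2) / 20.0)
    return band + rng.normal(0, 0.05, (n, n_channels))

train = {"drug_A": make_class(15), "drug_B": make_class(45)}
centroids = {k: v.mean(axis=0) for k, v in train.items()}

def predict(spectrum):
    # Assign the class whose mean spectrum is closest in Euclidean distance.
    return min(centroids, key=lambda k: np.linalg.norm(spectrum - centroids[k]))

pred = predict(make_class(45, n=1)[0])   # unseen sample from the drug_B band
```

Real single-cell profiling replaces the centroid rule with stronger supervised models and must additionally control for batch effects, but the pipeline shape — featurized spectra in, phenotype label out — is the same.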
610

A Virtual Reality-Based Study of Dependable Human-Machine Interfaces for Communication between Humans and Autonomous or Teleoperated Construction Machines

Sunding, Nikita, Johansson, Amanda January 2023 (has links)
The study aimed to identify and analyse methods for establishing external communication between humans and autonomous/teleoperated machines/vehicles using various Human-Machine Interfaces (HMIs). The study was divided into three phases. The purpose of the first phase was to identify and highlight previously tested/researched methods for establishing external communication by conducting a literature review. The findings from the literature review were categorised into six points of interest: machine indications, test delivery methods, HMI technologies/types, symbols, textual/numerical messages, and colours associated with different indications. Based on these findings, four HMIs (projection, display, LED-strip, and auditory) were selected for evaluation in a virtual reality environment in the second phase of the study, with the purpose of identifying which of the human-machine interfaces can effectively communicate the intentions of autonomous/teleoperated machines to humans. The results of phase two indicate that the participants preferred projection as the most effective individual HMI, and, when given the option to combine two HMIs, projection combined with auditory was the most preferred combination. The participants were also asked to pick three HMIs of their choosing, resulting in the projection, display and auditory combination being the preferred option. The evaluation of HMIs in a virtual reality environment contributes to improving dependability and identifying usability issues. The objective of the third and final phase was to gather all the findings from the previous phases and refine the report until it was considered finalised. Future work includes enhancing the realism of the VR environment, refining machine behaviour and scenarios, enabling multiple participants to interact with the environment simultaneously, and exploring alternative evaluation methods.
Addressing these areas will lead to more realistic evaluations and advancements in human-machine interaction research.
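Aggregating preference votes of the kind reported above can be sketched with a simple tally; the vote counts below are invented for illustration, not the study's data.

```python
from collections import Counter

# Illustrative tally of participants' preferred two-HMI combinations;
# the vote counts are hypothetical, not the study's actual results.
votes = ([("projection", "auditory")] * 9
         + [("projection", "display")] * 5
         + [("display", "auditory")] * 3
         + [("LED-strip", "auditory")] * 2)

# frozenset makes the pair order-independent before counting.
pair_counts = Counter(frozenset(v) for v in votes)
top_pair, top_n = pair_counts.most_common(1)[0]
```

The same tally generalises to triples (or any combination size) by freezing larger tuples, which is how the three-HMI preference result could be computed.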
