  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
791

Rates of Convergence and Microscopic Information in Random Matrix Theory

Taljan, Kyle 25 January 2022 (has links)
No description available.
792

Vox Machina

Ferguson, Sean January 1993 (has links)
No description available.
793

Temps en temps = (Times in time) : music for voice and instruments in a multi-track recording environment

Beaulieu, Marc. January 1996 (has links)
No description available.
794

Epithalame

Caron, Claude. January 1983 (has links)
No description available.
795

Traffic Signal Phase and Timing Prediction: A Machine Learning and Controller Logic Hybrid Approach

Eteifa, Seifeldeen Omar 14 March 2024 (has links)
Green light optimal speed advisory (GLOSA) systems require reliable estimates of signal switching times to improve vehicle energy/fuel efficiency. Successful deployment of infrastructure-to-vehicle communication requires Signal Phase and Timing (SPaT) messages to be populated with the most likely estimates of switching times and confidence levels in these estimates. Obtaining these estimates is difficult for actuated signals, where the length of each green indication changes to accommodate varying traffic conditions and pedestrian requests. This dissertation explores the different ways in which predictions can be made for the most likely switching times. Data are gathered from six intersections along the Gallows Road corridor in Northern Virginia. The application of long short-term memory (LSTM) neural networks for obtaining predictions is explored for one of the intersections. Different loss functions are evaluated and a new loss function is devised. Mean absolute percentage error is found to be the best loss function for short-term predictions, mean squared error is the best for long-term predictions, and the proposed loss function balances both well. The amount of historical data needed to make a single accurate prediction is assessed. The assessment concludes that short-term prediction is accurate with only a 3 to 10 second time window into the past, as long as the training dataset is large enough. Long-term prediction, however, is better with a larger past time window. The robustness of LSTM models to different demand levels is then assessed utilizing the unique scenario created by the COVID-19 pandemic stay-at-home order. The study shows that the models are robust to the changing demands, and while regularization does not substantially affect their robustness, L1 and L2 regularization can improve the overall prediction performance. 
An ensemble approach is used across the six intersections, considering the use of transformers for SPaT prediction for the first time. Transformers are shown to outperform other models, including LSTM. The ensemble provides a valuable metric of the certainty in each prediction through the level of consensus among the models. Finally, a hybrid approach integrating deep learning and controller logic is proposed by predicting actuations separately and using a digital twin to replicate SPaT information. This approach proves to be the best, with 58% lower mean absolute error than the alternatives. Overall, this dissertation provides a holistic methodology for predicting SPaT and the certainty level associated with it, tailored to existing technology and communication needs. / Doctor of Philosophy / Automated and connected vehicles waste a lot of fuel and energy stopping and starting at traffic signals. The ideal case is for them to know ahead of time when the traffic signal turns green and to plan to reach the intersection by the time it is green, so they do not have to stop. Not having to stop can save up to 40 percent of the fuel used at the intersection. This is a difficult task because the green time is not fixed. It has a minimum and maximum setting, and it keeps extending the green every time a new vehicle arrives. While this is good for adapting to traffic, it makes it difficult to know exactly when the traffic signal will turn green. In this dissertation, different models are used to predict ahead of time when the traffic signal will change. A model known as a long short-term memory (LSTM) neural network is chosen, which recognizes how the traffic signal is expected to behave in the future from its past behavior. The goal is to reduce the errors in the predictions. The first step is to look at the loss function, which is how the model deals with error. 
It is found that the best approach is to take the average of the absolute value of the error as a percentage of the prediction when the traffic signal will change soon. When there is a longer time until the traffic signal changes, the best approach is to take the average of the square of the error. Finally, another loss function is introduced to balance the two. The second question explored is how far back in time the data given to the model must extend for accurate prediction. For predictions of less than 20 seconds into the future, only 3 to 10 seconds of the past are needed. For predictions further into the future, looking further back can be useful. The third question explored is how these models perform after rare events like the COVID-19 pandemic. It was found that even though far fewer cars were passing through the intersections, the models still had low errors. Techniques known as regularization were used to reduce the models' reliance on specific data. This did not help the models do better after COVID, but two techniques known as L1 and L2 regularization improved overall performance. The study was then expanded to include six intersections and three additional models besides LSTM. One of these models, known as a transformer, had never been used before for this problem and was shown to make better predictions than the other models. The consensus between the models, which is how many of the models agree on the prediction, was used as a measure of certainty in the prediction and proved to be a good indicator. An approach is then introduced that combines knowledge of the traffic signal controller logic with the predictive power of machine learning models. This is done by making a computer program, known as a digital twin, that replicates the logic of the traffic signal controller. Machine learning models are then used to predict vehicle arrivals. 
The program is then run using the predicted arrivals to provide a replication of the signal timing. This approach is found to be the best, with 58 percent less error than the others. Overall, this dissertation provides an end-to-end solution that uses real data generated from intersections to predict the time to green and estimate the certainty in the prediction, helping automated and connected vehicles be more fuel efficient.
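The loss-function comparison in the abstract above can be sketched numerically. The dissertation's actual proposed loss is not given here, so the weighted blend below (and its `alpha` weight) is a hypothetical illustration; only MAPE and MSE are the standard losses the text names.

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error: found best for short-term predictions
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def mse(y_true, y_pred):
    # Mean squared error: found best for long-term predictions
    return np.mean((y_true - y_pred) ** 2)

def balanced_loss(y_true, y_pred, alpha=0.5):
    # Hypothetical blend of the two regimes (not the dissertation's formula)
    return alpha * mape(y_true, y_pred) + (1.0 - alpha) * mse(y_true, y_pred)

# Illustrative time-to-green estimates, in seconds
y_true = np.array([10.0, 20.0, 30.0])
y_pred = np.array([12.0, 18.0, 33.0])
short_term_error = mape(y_true, y_pred)
long_term_error = mse(y_true, y_pred)
```

MAPE weights errors relative to the true time-to-green, which matters most when the signal is about to change; MSE penalizes large absolute misses, which dominate long-horizon predictions.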
796

Tracking time evolving data streams for short-term traffic forecasting

Abdullatif, Amr R.A., Masulli, F., Rovetta, S. 20 January 2020 (has links)
Yes / Data streams have arisen as a relevant topic during the last few years as an efficient method for extracting knowledge from big data. In the robust layered ensemble model (RLEM) proposed in this paper for short-term traffic flow forecasting, incoming traffic flow data of all connected road links are organized in chunks corresponding to an optimal time lag. The RLEM is composed of two layers. In the first layer, we cluster the chunks by using the Graded Possibilistic c-Means method. The second layer is made up of an ensemble of forecasters, each of them trained for short-term traffic flow forecasting on the chunks belonging to a specific cluster. In the operational phase, as a new chunk of traffic flow data is presented as input to the RLEM, its memberships to all clusters are evaluated, and if it is not recognized as an outlier, the outputs of all forecasters are combined in an ensemble, obtaining in this way a forecast of traffic flow for a short-term time horizon. The proposed RLEM is evaluated on a synthetic data set, on a traffic flow data simulator, and on two real-world traffic flow data sets. The model gives an accurate forecast of the traffic flow rates with outlier detection and shows good adaptation to non-stationary traffic regimes. Given its characteristics of outlier detection, accuracy, and robustness, RLEM can be fruitfully integrated in traffic flow management systems.
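A minimal sketch of the two-layer idea, with ordinary KMeans standing in for the paper's Graded Possibilistic c-Means and hard cluster assignment replacing its graded membership-weighted combination of all forecasters; the synthetic data and linear forecasters are illustrative assumptions only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Synthetic "chunks": each row is a lagged window of flow readings;
# the target is the next reading (here just the window sum plus noise)
X = rng.random((120, 5))
y = X.sum(axis=1) + 0.01 * rng.standard_normal(120)

# Layer 1: cluster the chunks (KMeans stands in for Graded Possibilistic c-Means)
clusterer = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Layer 2: one forecaster per cluster, trained only on that cluster's chunks
forecasters = {
    c: LinearRegression().fit(X[clusterer.labels_ == c], y[clusterer.labels_ == c])
    for c in range(3)
}

def forecast(chunk):
    # Hard routing: send the new chunk to its nearest cluster's forecaster.
    # (RLEM instead combines all forecasters weighted by graded memberships,
    # and rejects chunks whose memberships mark them as outliers.)
    c = clusterer.predict(chunk.reshape(1, -1))[0]
    return forecasters[c].predict(chunk.reshape(1, -1))[0]
```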
797

Vad händer på berget Olympos : Writing and producing a performance for a baroque ensemble

Linde, Alexandra January 2023 (has links)
My work investigates how to effectively write new material for a small baroque opera ensemble, and how to create a storyline that moves through contrasting arias and duets with different synopses.
798

Machine Learning and Multivariate Statistics for Optimizing Bioprocessing and Polyolefin Manufacturing

Agarwal, Aman 07 January 2022 (has links)
Chemical engineers have routinely used computational tools for modeling, optimizing, and debottlenecking chemical processes. Because of the advances in computational science over the past decade, multivariate statistics and machine learning have become an integral part of the computerization of chemical processes. In this research, we look into using multivariate statistics, machine learning tools, and their combinations through a series of case studies including a case with a successful industrial deployment of machine learning models for fermentation. We use both commercially-available software tools, Aspen ProMV and Python, to demonstrate the feasibility of the computational tools. This work demonstrates a novel application of ensemble-based machine learning methods in bioprocessing, particularly for the prediction of different fermenter types in a fermentation process (to allow for successful data integration) and the prediction of the onset of foaming. We apply two ensemble frameworks, Extreme Gradient Boosting (XGBoost) and Random Forest (RF), to build classification and regression models. Excessive foaming can interfere with the mixing of reactants and lead to problems, such as decreasing effective reactor volume, microbial contamination, product loss, and increased reaction time. Physical modeling of foaming is an arduous process as it requires estimation of foam height, which is dynamic in nature and varies for different processes. In addition to foaming prediction, we extend our work to control and prevent foaming by allowing data-driven ad hoc addition of antifoam using exhaust differential pressure as an indicator of foaming. We use large-scale real fermentation data for six different types of sporulating microorganisms to predict foaming over multiple strains of microorganisms and build exploratory time-series driven antifoam profiles for four different fermenter types. 
In order to successfully predict the antifoam addition from the large-scale multivariate dataset (about half a million instances for 163 batches), we use TPOT (Tree-based Pipeline Optimization Tool), an automated genetic programming algorithm, to find the best pipeline from among 600 candidate pipelines. Our antifoam profiles are able to decrease hourly volume retention by over 53% for a specific fermenter. A decrease in hourly volume retention leads to an increase in fermentation product yield. We also study two different cases associated with the manufacturing of polyolefins, particularly LDPE (low-density polyethylene) and HDPE (high-density polyethylene). Through these cases, we showcase the usage of machine learning and multivariate statistical tools to improve process understanding and enhance the predictive capability for process optimization. By using indirect measurements such as temperature profiles, we demonstrate the viability of such measures in the prediction of polyolefin quality parameters, anomaly detection, and statistical monitoring and control of the chemical processes associated with an LDPE plant. We use dimensionality reduction, visualization tools, and regression analysis to achieve our goals. Using advanced analytical tools and a combination of algorithms such as PCA (Principal Component Analysis), PLS (Partial Least Squares), and Random Forest, we identify predictive models that can be used to create inferential schemes. Soft sensors are widely used for on-line monitoring and real-time prediction of process variables. In one of our cases, we use advanced machine learning algorithms to predict the polymer melt index, which is crucial in determining the product quality of polymers. We use real industrial data from one of the leading chemical engineering companies in the Asia-Pacific region to build a predictive model for an HDPE plant. Lastly, we show an end-to-end workflow for deep learning on both industrial and simulated polyolefin datasets. 
Thus, using these five cases, we explore the usage of advanced machine learning and multivariate statistical techniques in the optimization of chemical and biochemical processes. The recent advances in computational hardware allow engineers to design such data-driven models, which enhances their capacity to effectively and efficiently monitor and control a process. We showcase that even non-expert chemical engineers can implement such machine learning algorithms with ease using open-source or commercially available software tools. / Doctor of Philosophy / Most chemical and biochemical processes are equipped with advanced probes and connectivity sensors that collect large amounts of data on a daily basis. It is critical to manage and utilize the significant amount of data collected from the start and throughout the development and manufacturing cycle. Chemical engineers have routinely used computational tools for modeling, designing, optimizing, debottlenecking, and troubleshooting chemical processes. Herein, we present different applications of machine learning and multivariate statistics using industrial datasets. This dissertation also includes a deployed industrial solution to mitigate foaming in commercial fermentation reactors as a proof-of-concept (PoC). Our antifoam profiles are able to decrease volume loss by over 53% for a specific fermenter. Throughout this dissertation, we demonstrate applications of several techniques like ensemble methods, automated machine learning, exploratory time series, and deep learning for solving industrial problems. Our aim is to bridge the gap from industrial data acquisition to finding meaningful insights for process optimization.
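The foaming-onset classification described above can be sketched with a random forest, one of the two ensemble frameworks the abstract names (XGBoost would be applied analogously). The sensor features and the rule generating the synthetic labels are hypothetical stand-ins; the actual work uses large-scale real fermentation data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Hypothetical sensor features: exhaust differential pressure, agitation, airflow
X = rng.random((500, 3))
# Hypothetical foaming-onset rule: high exhaust dP combined with high airflow
# (the abstract notes exhaust differential pressure as the foaming indicator)
y = ((X[:, 0] > 0.6) & (X[:, 2] > 0.4)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

In deployment, a positive prediction would trigger the data-driven ad hoc antifoam addition the abstract describes.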
799

Weakly Supervised Machine Learning for Cyberbullying Detection

Raisi, Elaheh 23 April 2019 (has links)
The advent of social media has revolutionized human communication, significantly improving individuals' lives. It brings people closer to each other, provides access to enormous amounts of real-time information, and eases marketing and business. Despite its countless benefits, however, we must consider some of its negative implications, such as online harassment and cyberbullying. Cyberbullying is becoming a serious, large-scale problem damaging people's online lives. This phenomenon is creating a need for automated, data-driven techniques for analyzing and detecting such behaviors. In this research, we aim to address the computational challenges associated with harassment-based cyberbullying detection in social media by developing a machine-learning framework that requires only weak supervision. We propose a general framework that trains an ensemble of two learners, each of which looks at the problem from a different perspective. One learner identifies bullying incidents by examining the language content in the message; another learner considers the social structure to discover bullying. Each learner uses a different body of information, and the two learners co-train one another to come to an agreement about the bullying concept. The models estimate whether each social interaction is bullying by optimizing an objective function that maximizes the consistency between these detectors. We first developed a model we refer to as participant-vocabulary consistency, which is an ensemble of two linear language-based and user-based models. The model is trained by providing a set of seed key-phrases that are indicative of bullying language. The results were promising, demonstrating its effectiveness and usefulness in recovering known bullying words, recognizing new bullying words, and discovering users involved in cyberbullying. 
We have extended this co-trained ensemble approach with two complementary goals: (1) using nonlinear embeddings as model families, and (2) building a fair language-based detector. For the first goal, we incorporated the efficacy of distributed representations of words and nodes through deep, nonlinear models. We represent words and users as low-dimensional vectors of real numbers as the input to language-based and user-based classifiers, respectively. The models are trained by optimizing an objective function that balances a co-training loss with a weak-supervision loss. Our experiments on Twitter, Ask.fm, and Instagram data show that deep ensembles outperform non-deep methods for weakly supervised harassment detection. For the second goal, we geared this research toward a very important topic in any automated online harassment detection: fairness with respect to particular targeted groups, including race, gender, religion, and sexual orientation. Our goal is to decrease the sensitivity of models to language describing particular social groups. We encourage the learning algorithm to avoid discrimination in its predictions by adding an unfairness penalty term to the objective function. We quantitatively and qualitatively evaluate the effectiveness of our proposed general framework on synthetic data and on data from Twitter using post-hoc, crowdsourced annotation. In summary, this dissertation introduces a weakly supervised machine learning framework for harassment-based cyberbullying detection using both messages and user roles in social media. / Doctor of Philosophy / Social media has become an inevitable part of individuals' social and business lives. Its benefits, however, come with various negative consequences such as online harassment, cyberbullying, hate speech, and online trolling, especially among the younger population. 
According to the American Academy of Child and Adolescent Psychiatry, victims of bullying can suffer interference to social and emotional development and even be drawn to extreme behavior such as attempted suicide. Any widespread bullying enabled by technology represents a serious social health threat. In this research, we develop automated, data-driven methods for harassment-based cyberbullying detection. The availability of such tools can enable technologies that reduce the harm and toxicity created by these detrimental behaviors. Our general framework is based on the consistency of two detectors that co-train one another. One learner identifies bullying incidents by examining the language content in the message; another learner considers social structure to discover bullying. When designing the general framework, we address three tasks. First, we use machine learning with weak supervision, which significantly alleviates the need for human experts to perform tedious data annotation. Second, we incorporate the efficacy of distributed representations of words and nodes, such as deep, nonlinear models, in the framework to improve the predictive power of the models. Finally, we decrease the sensitivity of the framework to language describing particular social groups, including race, gender, religion, and sexual orientation. This research represents important steps toward improving the technological capability for automatic cyberbullying detection.
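The consistency the framework maximizes can be sketched as the agreement rate between two weak detectors, each looking at a different view of the same interactions. Everything below (the latent propensity score, the threshold detectors, the noise level) is an illustrative stand-in for the actual trained language-based and user-based learners.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Latent "bullying" propensity of each interaction (illustrative only)
z = rng.random(n)
# View A: message-language features; View B: social-structure features.
# Each view observes the same latent signal with independent noise.
view_lang = z + 0.1 * rng.standard_normal(n)
view_user = z + 0.1 * rng.standard_normal(n)

def detector(view, threshold=0.5):
    # Stand-in for a trained learner: score each interaction from one view
    return (view > threshold).astype(int)

# The fraction of interactions on which the two detectors agree -- the kind
# of cross-view consistency the joint co-training objective drives upward
consistency = np.mean(detector(view_lang) == detector(view_user))
```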
800

Ensemble Learning Techniques for Structured and Unstructured Data

King, Michael Allen 01 April 2015 (has links)
This research provides an integrated approach to applying innovative ensemble learning techniques that has the potential to increase the overall accuracy of classification models. Actual structured and unstructured data sets from industry are utilized during the research process, analysis, and subsequent model evaluations. The first research section addresses the consumer demand forecasting and daily capacity management requirements of a nationally recognized alpine ski resort in the state of Utah, in the United States of America. A basic econometric model is developed, and three classic predictive models are evaluated for effectiveness. These predictive models were subsequently used as input for four ensemble modeling techniques, which are shown to be effective. The second research section discusses the opportunities and challenges faced by a leading firm providing sponsored search marketing services. The goal of sponsored search marketing campaigns is to create advertising campaigns that better attract and motivate a target market to purchase. This research develops a method for classifying profitable campaigns and maximizing overall campaign portfolio profits. Four traditional classifiers are utilized, along with four ensemble learning techniques, to build classifier models to identify profitable pay-per-click campaigns. A MetaCost ensemble configuration, having the ability to integrate unequal classification costs, produced the highest campaign portfolio profit. The third research section addresses the management challenges of online consumer reviews encountered by service industries and addresses how these textual reviews can be used for service improvements. A service improvement framework is introduced that integrates traditional text mining techniques and second-order feature derivation with ensemble learning techniques. 
The concept of GLOW and SMOKE words is introduced and is shown to be an objective text analytic source of service defects or service accolades. / Ph. D.
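The MetaCost configuration mentioned in the abstract can be sketched as follows: estimate class probabilities with a bagged ensemble, relabel each training example with its minimum-expected-cost class, then retrain an ordinary classifier on the relabeled data. The campaign features and the cost matrix (penalizing missed profitable campaigns) are illustrative assumptions, not values from the dissertation.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
# Hypothetical campaign features; class 1 = profitable campaign
X = rng.random((400, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

# cost[i][j] = cost of predicting class j when the truth is class i;
# missing a profitable campaign (row 1, column 0) is penalized heavily
cost = np.array([[0.0, 1.0],
                 [5.0, 0.0]])

# MetaCost's core idea: estimate class probabilities with a bagged ensemble,
# then relabel each training example with its minimum-expected-cost class
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                        random_state=0).fit(X, y)
probs = bag.predict_proba(X)          # columns follow bag.classes_ = [0, 1]
relabeled = np.argmin(probs @ cost, axis=1)

# Retraining on the relabeled data yields a cost-sensitive final model
final = DecisionTreeClassifier(random_state=0).fit(X, relabeled)
```

With an asymmetric cost matrix like this one, the relabeling step shifts borderline examples toward the expensive-to-miss class, which is how MetaCost makes any base learner cost-sensitive without modifying its training algorithm.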
