111

Improving processor power demand comprehension in data-driven power and software phase classification and prediction

Khoshbakht, Saman 14 August 2018 (has links)
The single-core performance trend predicted by Moore's law has been impeded in recent years, partly due to the limitations imposed by increasing processor power demands. One way to mitigate this limitation is the introduction of multi-core and multi-processor computation. Another approach to increasing the performance-per-Watt metric is to utilize the processor's power more efficiently. In a single-core system, the processor cannot sustainably dissipate more than the nominal Thermal Design Power (TDP) limit determined for the processor at design time. Therefore, it is important to understand and manage the power demands of the processes being executed. This principle also applies to multi-core and multi-processor environments. In a multi-processor environment, if the power demands of the workload are known, the power management unit can schedule the workload to a processor in the most efficient way, based on the state of each processor and process; this is an instance of the knapsack problem. Another approach, also applicable to multi-cores, is to reduce a core's power by reducing its working voltage and frequency, mitigating power bursts, leaving more headroom for other cores, and keeping total power under the TDP limit. The information collected from the execution of the software running on the processor (i.e. the workload) is the key to determining the power-management actions needed at any given time. This work comprises two different approaches to improving the comprehension of software power demands as the software executes on the processor. In the first part of this work, the effects of software data on power are analysed. It is important to be able to model power based on the instructions the software comprises; however, to the best of our knowledge, no existing work investigates the effect of the values being processed on processor power.
Creating a power model capable of accurately reflecting the power demands of the software at any given time is a problem addressed by previous research. A software power model can be used in processor simulation environments, as well as in the processor itself, to estimate power dissipation without the need to physically measure it. In the first part of this research, the effects of software data on power are investigated. To collect the required data, a profiler tool was developed by the author and used in both parts of the research. The second part of this work focuses on how processor power develops over time during the execution of the software. Understanding the power demands of the processor at any given time is important for maintaining and managing processor power. Additionally, insight into the future power demands of the software can help the system plan scheduling ahead of time, in order to prepare for any high-power section of the code and to plan to use the power headroom made available by an upcoming low-power section. In this part of our work, a new hierarchical approach to software phase classification is developed. The software phase classification problem focuses on determining the behaviour of the software at any given time slice by assigning the time slice to one of a set of pre-determined software phases. Each phase is assumed to have known behaviour, previously measured from observed instances of the phase or estimated by a model. Using a two-tiered hierarchical clustering approach, our proposed phase classification methodology incorporates the recent performance behaviour of the software in order to determine the power phase.
We focused on determining the power phase from performance information because real processor power is usually not available without added hardware, whereas a large number of performance counters are available on most modern processors. Additionally, based on our observations, the relation between performance phases and power behaviour is highly predictable. This method is shown to provide robust results with a low amount of noise compared to other methods, while providing high enough timing accuracy for the processor to act on. To the best of our knowledge, no other existing work provides both this timing accuracy and this reduced noise. Software phase classification can be used to control processor power based on the software's phase at any given time, but it does not provide insight into the future progression of the workload. Finally, we developed and compared several phase prediction methodologies based on the concepts of phase precursors and phase locality. Phase-precursor-based methods rely on detecting the precursors observed before the software enters a certain phase, while phase-locality methods rely on the locality principle, which postulates a high probability that the current software behaviour will also be observed in the near future. The phase classification and phase prediction methodologies were shown to reduce the power bursts within a workload and thereby provide a smoother power trace. As the bursts are removed from one workload's power trace, the multi-core processor's power headroom can be confidently utilized for another process. / Graduate
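As a rough illustration of the two-tiered classification idea described in this abstract (not the thesis's actual implementation), a nearest-centroid sketch: a time slice's performance-counter vector is first matched to a coarse phase, then to a sub-phase within it. The centroid layout and two-dimensional features below are invented for the example.

```python
import numpy as np

def classify_phase(sample, tier1_centroids, tier2_centroids):
    """Two-tier nearest-centroid phase classification (illustrative sketch).

    sample          : (d,) performance-counter vector for one time slice
    tier1_centroids : (k1, d) coarse phase centroids
    tier2_centroids : dict mapping tier-1 index -> (k2, d) sub-phase centroids
    Returns (tier1_id, tier2_id).
    """
    # Tier 1: assign the slice to the nearest coarse phase.
    d1 = np.linalg.norm(tier1_centroids - sample, axis=1)
    t1 = int(np.argmin(d1))
    # Tier 2: refine within that coarse phase's own sub-centroids.
    d2 = np.linalg.norm(tier2_centroids[t1] - sample, axis=1)
    return t1, int(np.argmin(d2))
```

In a real pipeline the centroids would come from hierarchical clustering of previously observed counter traces, and each sub-phase would carry an associated power estimate.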
112

Síntese de controladores ressonantes baseado em dados aplicado a fontes ininterruptas de energia

Schildt, Alessandro Nakoneczny January 2014 (has links)
This work addresses a controller tuning method based on data obtained from the plant. The proposal is to tune resonant controllers for the frequency inverters found in uninterruptible power supplies, with the goal of tracking a sinusoidal voltage reference. In this context, the Virtual Reference Feedback Tuning (VRFT) algorithm is used: a data-driven controller identification method that is not iterative and does not require a system model to identify the controller. From data obtained from the plant and a reference model defined by the designer, the method estimates the parameters of a previously fixed controller structure by minimizing a cost function defined by the error between the desired and actual outputs. In addition, a current feedback is required in the control loop, whose proportional gain is defined by empirical experiment. To demonstrate the method, simulated and practical results are presented for a 5 kVA uninterruptible power supply operating with linear and non-linear loads. Performance is evaluated in terms of the quality of the actual output signal obtained with controllers tuned from different reference models, and several excitation signals are used to feed the VRFT algorithm. The experimental results are obtained on a single-phase inverter with a real-time platform based on the dSPACE DS1104 data acquisition board. The results show that, with respect to international standards, the proposed control system tracks the reference well when operating at no load or with a linear load.
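The one-shot, model-free character of VRFT can be sketched in a few lines: invert an assumed reference model to get a virtual reference, form the virtual error, and fit the controller gains by least squares. The first-order reference model, the PI controller structure, and all signal values below are assumptions for illustration; the thesis applies the method to resonant controllers on a real inverter.

```python
import numpy as np

def vrft_fit(u, y, a=0.7):
    """One-shot VRFT-style fit of a discrete PI controller (illustrative sketch).

    u, y : recorded plant input and output from one experiment
    Assumed reference model: y[k+1] = a*y[k] + (1-a)*r[k].
    Inverting it yields the virtual reference r; the virtual error e = r - y
    is regressed against u to obtain the PI gains (kp, ki).
    """
    # Virtual reference from the inverted reference model.
    r = (y[1:] - a * y[:-1]) / (1.0 - a)
    e = r - y[:-1]                             # virtual tracking error
    phi = np.column_stack([e, np.cumsum(e)])   # PI regressor: [e, sum(e)]
    theta, *_ = np.linalg.lstsq(phi, u[:-1], rcond=None)
    return theta                               # (kp, ki)
```

Note that no plant model appears anywhere: only the recorded (u, y) pair and the designer's reference model are used, which is the defining feature of the method.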
113

Building Energy Modeling: A Data-Driven Approach

January 2016 (has links)
abstract: Buildings consume nearly 50% of the total energy in the United States, which drives the need to develop high-fidelity models for building energy systems. Extensive methods and techniques have been developed, studied, and applied to building energy simulation and forecasting, but most work has focused on developing a dedicated modeling approach for generic buildings. In this study, an integrated, computationally efficient, and high-fidelity building energy modeling framework is proposed, concentrating on a generalized modeling approach for various types of buildings. First, a number of data-driven simulation models are reviewed and assessed on various types of computationally expensive simulation problems. Motivated by the conclusion that no model outperforms the others when amortized over diverse problems, a meta-learning-based recommendation system for data-driven simulation modeling is proposed. To test the feasibility of the proposed framework on building energy systems, an extended application of the recommendation system for short-term building energy forecasting is deployed on various buildings. Finally, a Kalman-filter-based data fusion technique is incorporated into the building recommendation system for on-line energy forecasting. Data fusion enables model calibration to update the state estimation in real time, which filters out noise and renders a more accurate energy forecast. The framework is composed of two modules: an off-line model recommendation module and an on-line model calibration module. Specifically, the off-line model recommendation module includes six widely used data-driven simulation models, which are ranked by the meta-learning recommendation system for off-line energy modeling on a given building scenario. Only a selective set of building physical and operational characteristic features is needed to complete the recommendation task.
The on-line calibration module effectively addresses system uncertainties, applying data fusion to the off-line model based on system identification and Kalman filtering methods. The developed data-driven modeling framework is validated on various genres of buildings, and the experimental results demonstrate the desired performance on building energy forecasting in terms of accuracy and computational efficiency. The framework could easily be implemented in building energy model predictive control (MPC), demand response (DR) analysis, and real-time operation decision support systems. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2016
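The on-line calibration idea — fusing a model forecast with an incoming measurement via Kalman filtering — can be illustrated with a minimal scalar update. The noise variances q and r below are placeholders, not values from the study.

```python
def kalman_update(x, p, z, q=0.01, r=1.0):
    """One scalar Kalman step fusing a forecast with a measurement (sketch).

    x, p : prior state estimate (e.g. forecast energy use) and its variance
    z    : new measurement (e.g. metered energy use)
    q, r : assumed process and measurement noise variances
    Returns the posterior estimate and variance.
    """
    p = p + q                 # predict: variance grows by process noise
    k = p / (p + r)           # Kalman gain: how much to trust the measurement
    x = x + k * (z - x)       # correct the forecast toward the measurement
    p = (1.0 - k) * p         # posterior variance shrinks after the update
    return x, p
```

Repeating this step as each new meter reading arrives is what lets the framework filter out noise and keep the forecast calibrated in real time.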
114

Run-to-run modelling and control of batch processes

Duran Villalobos, Carlos Alberto January 2016 (has links)
The University of Manchester. Carlos Alberto Duran Villalobos. Doctor of Philosophy in the Faculty of Engineering and Physical Sciences. December 2015. This thesis presents an innovative batch-to-batch optimisation technique that was able to improve the productivity of two benchmark fed-batch fermentation simulators: Saccharomyces cerevisiae and penicillin production. In developing the proposed technique, several important challenges needed to be addressed. For example, the technique relied on a linear Multiway Partial Least Squares (MPLS) model that had to adapt from one operating region to another as productivity increased, in order to estimate the end-point quality of each batch accurately. The proposed optimisation technique utilises a Quadratic Programming (QP) formulation to calculate the Manipulated Variable Trajectory (MVT) from one batch to the next. The main advantages of the proposed technique compared with other published approaches were the increase in yield and the faster convergence to an optimal MVT. Validity constraints were also included in the batch-to-batch optimisation to restrict the QP calculations to the space described by useful predictions of the MPLS model. The results from experiments on the two simulators showed that the validity constraints slowed the rate of convergence of the optimisation technique and in some cases resulted in a slight reduction in final yield. However, the introduction of the validity constraints did improve the consistency of the batch optimisation. Another important contribution of this thesis was a series of experiments that combined a variety of smoothing techniques used in MPLS modelling with the proposed batch-to-batch optimisation technique. From the results of these experiments, it was clear that the MPLS model's prediction accuracy did not significantly improve with these smoothing techniques.
However, the batch-to-batch optimisation technique did show improvements when filtering was implemented.
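The batch-to-batch iteration can be caricatured as a constrained step on the manipulated-variable trajectory. The sketch below substitutes a projected-gradient step with simple bound constraints for the thesis's QP formulation with MPLS-based validity constraints, so it conveys only the shape of the iteration; all names and values are invented for illustration.

```python
import numpy as np

def batch_to_batch_step(u, grad, u_min, u_max, step=0.1):
    """One simplified update of the manipulated-variable trajectory (sketch).

    u     : current MVT, discretised as a vector over the batch duration
    grad  : gradient of predicted end-point quality w.r.t. u (here assumed
            given; in the thesis it comes from the MPLS model inside a QP)
    u_min, u_max : bounds standing in for the validity constraints
    """
    u_new = u + step * grad               # move toward higher predicted yield
    return np.clip(u_new, u_min, u_max)   # project back into the valid region
```

The role of the projection matches the abstract's finding qualitatively: restricting steps to the trusted region can slow convergence but makes the run-to-run behaviour more consistent.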
116

Curious Cuisine : Bringing culinary creativity home

Nacsa, Júlia January 2016 (has links)
How could culinary science and technology educate us about food through engagement and reflection? In this project, I set out to uncover opportunities for design intervention within near-future scenarios of cooking and eating in a home environment. My intent has been to use interaction design methodology to form social practices that make the process of making and eating food more pleasurable and inspiring, while developing one's individual knowledge, without being didactic and prescriptive. The hypothesis has been that culinary science, simplified and combined with today's data-driven technologies, has the potential to foster creativity and experimentation among hobby cooks. The aim has been to discover the consequences of cloud data and connected technologies for experimentation, which is inherently driven by human intuition. My approach has been to explore what behaviors such data-driven systems designed for eliciting creativity could possess, and what kind of inspiration the science of flavor could bring into everyday cooking. The result is a set of design principles for how creative cooking explorations can be fostered through tangible and embodied experiences. It is manifested in a concept that creates a ‘culinary safe zone’ by encouraging experimentation and presenting information on demand, without overshadowing the cook’s intuition. The concept, Curious Cuisine, allows non-professional cooks to create their own unique dishes: to explore ingredient pairings, preparation techniques, and the fine-tuning of flavors.
117

Automated Data-Driven Hint Generation for Learning Programming

Rivers, Kelly 01 July 2017 (has links)
Feedback is an essential component of the learning process, but in fields like computer science, which have rapidly increasing class sizes, it can be difficult to provide feedback to students at scale. Intelligent tutoring systems can provide personalized feedback to students automatically, but they can take large amounts of time and expert knowledge to build, especially when determining how to give students hints. Data-driven approaches can provide personalized next-step hints automatically and at scale by mining previous students’ solutions. I have created ITAP, the Intelligent Teaching Assistant for Programming, which automatically generates next-step hints for students in basic Python programming assignments. ITAP is composed of three stages: canonicalization, where a student's code is transformed into an abstracted representation; path construction, where the closest correct state is identified and a series of edits toward that goal state is generated; and reification, where the edits are transformed back into the student's original context. With these techniques, ITAP can generate next-step hints for 100% of student submissions, and can even chain these hints together to generate a worked example. Initial analysis showed that hints could be used in practice problems in a real classroom environment, but also demonstrated that students' relationships with hints and help-seeking were complex and required deeper investigation. In my thesis work, I surveyed and interviewed students about their experience with help-seeking and using feedback, and found that students wanted more detail in hints than was initially provided. To determine how hints should be structured, I ran a usability study with programmers at varying levels of knowledge, where I found that more novice students needed much higher levels of content and detail in hints than was traditionally given.
I also found that examples were commonly used in the learning process, and could serve an integral role in the feedback provision process. I then ran a randomized control trial experiment to determine the effect of next-step hints on learning and time-on-task in a practice session, and found that having hints available resulted in students spending 13.7% less time during practice while achieving the same learning results as the control group. Finally, I used the data collected during these experiments to measure ITAP’s performance over time, and found that generated hints improved as data was added to the system. My dissertation has contributed to the fields of computer science education, learning science, human-computer interaction, and data-driven tutoring. In computer science education, I have created ITAP, which can serve as a practice resource for future programming students during learning. In the learning sciences, I have replicated the expertise reversal effect by finding that more expert programmers want less detail in hints than novice programmers; this finding is important as it implies that programming teachers may provide novices with less assistance than they need. I have contributed to the literature on human-computer interaction by identifying multiple possible representations of hint messages, and analyzing how users react to and learn from these different formats during program debugging. Finally, I have contributed to the new field of data-driven tutoring by establishing that it is possible to always provide students with next-step hints, even without a starting dataset beyond the instructor’s solution, and by demonstrating that those hints can be improved automatically over time.
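A drastically simplified sketch of the canonicalization / path-construction idea behind ITAP: normalize code to an AST dump so formatting differences vanish, pick the closest correct solution, and surface the first differing line as a hint. Real ITAP computes fine-grained AST edits and reifies them back into the student's context; everything below is an illustrative stand-in.

```python
import ast
import difflib

def canonicalize(src):
    """Reduce source code to an AST dump so that formatting and comment
    differences disappear (a much-simplified stand-in for ITAP's
    canonicalization stage)."""
    return ast.dump(ast.parse(src))

def next_step_hint(student_src, correct_solutions):
    """Pick the correct solution closest to the student's code and surface
    the first differing line as a next-step hint (illustrative sketch)."""
    best = max(correct_solutions,
               key=lambda s: difflib.SequenceMatcher(
                   None, canonicalize(student_src), canonicalize(s)).ratio())
    for s_line, b_line in zip(student_src.splitlines(), best.splitlines()):
        if s_line != b_line:
            return f"Look at '{s_line.strip()}' — compare it with '{b_line.strip()}'"
    return "Your code already matches the goal structure."
```

Because the hint is derived from whichever correct solution is nearest, the quality of hints improves as more solutions accumulate, which mirrors the dissertation's finding that generated hints improved as data was added to the system.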
118

Supporting product development with a tangible platform for simulating user scenarios

Ruvald, Ryan January 2017 (has links)
Motivation: Today’s sustainability challenges are increasingly being addressed by Product Service Systems, which satisfy customers’ needs while lowering their overall environmental impact. These systems are increasingly complex, containing diverse artifacts and interactions. To provide a holistic solution centered on the human experience, the design of product-service systems is best driven by data gathered through design thinking methods.   Problem: When considering innovation challenges, such as the deployment of autonomous electric machines on future construction sites, data-driven design can suffer from a lack of tangible user feedback upon which to base design decisions.   Approach: In this study, a scaled-down construction site structured around generally applicable operations was built as a prototype for involving various users in early-phase development of an HMI for interacting with prototype machines built by Volvo CE, called the HX01. Qualitative data acquisition methods were derived from design thinking approaches to need-finding, including a questionnaire, unstructured interviews, and observations.   Results: The prototype site became a 5 meter x 5 meter semi-portable site with 1:11 scale machines, including excavators, wheeled loaders, and autonomous haulers. The product tested with the site was an augmented reality interface providing a communication platform between workers and the autonomous haulers, designed to build trust and enable collaboration. Test users and observers provided feedback confirming the effectiveness of the scale-site scenario in conveying the necessary context of a realistic interaction experience. Beyond HMI testing, the site served as a tangible artifact to instigate conversations across domain boundaries.
Conclusions: The tangible experiential scenario platform developed here displayed the capability to go beyond one-way communication of concepts to customers, by including customers as integral participants in the testing of new products and services. For design teams, the site can facilitate deeper learning and validation via a shared contextualization of user feedback. Further implications may include the ability to strengthen the risk-assessment rationale at design decision gates for new products, and to enable the identification of emergent issues in complex future scenarios.
119

Dynamic Data-Driven Visual Surveillance of Human Crowds via Cooperative Unmanned Vehicles

Minaeian, Sara January 2017 (has links)
Visual surveillance of human crowds in dynamic environments has attracted a great amount of computer vision research in recent years. Moving object detection, which conventionally includes motion segmentation and, optionally, object classification, is the first major task for any visual surveillance application. After detecting the targets, their geo-locations must be estimated to place them in a common reference coordinate system for higher-level decision-making. Depending on the required fidelity of decisions, multi-target data association may also be needed at higher levels to differentiate multiple targets across a series of frames. Applying all of these vision-based algorithms to a crowd surveillance system (the major application studied in this dissertation) using a team of cooperative unmanned vehicles (UVs) introduces new challenges. Because the visual sensors move with the UVs, and the targets and the environment are therefore dynamic, the complexity and uncertainty of the video processing increase. Moreover, the limited onboard computation resources require more efficient algorithms. Responding to these challenges, the goal of this dissertation is to design and develop an effective and efficient visual surveillance system, based on the dynamic data-driven application systems (DDDAS) paradigm, to be used by cooperative UVs for autonomous crowd control and border patrol.
The proposed visual surveillance system includes several modules: 1) a motion detection module, in which a new sliding-window-based method for detecting multiple moving objects is proposed to segment the moving foreground using the moving camera onboard the unmanned aerial vehicle (UAV); 2) a target recognition module, in which a customized method based on histograms of oriented gradients is applied to classify human targets using the onboard camera of the unmanned ground vehicle (UGV); 3) a target geo-localization module, in which a new moving-landmark-based method is proposed for estimating the geo-location of the detected crowd from the UAV, while a heuristic method based on triangulation is applied for geo-locating detected individuals via the UGV; and 4) a multi-target data association module, in which the affinity score is dynamically adjusted to comply with the changing dispersion of the detected targets over successive frames. In this dissertation, a cooperative team of one UAV and multiple UGVs with onboard visual sensors is used to take advantage of the complementary characteristics (e.g. different fidelities and view perspectives) of these UVs for crowd surveillance. The DDDAS paradigm is also applied to these vision-based modules, unifying the computational and instrumentation aspects of the application system for more accurate or efficient analysis according to the scenario. To illustrate and demonstrate the proposed visual surveillance system, aerial and ground video sequences from the UVs, as well as simulation models, were developed, and experiments were conducted on them. The experimental results on both the developed videos and literature datasets reveal the effectiveness and efficiency of the proposed modules and their promising performance in the considered crowd surveillance application.
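The UGV-side geo-localization step is described as triangulation-based; a minimal bearing-only triangulation in 2-D might look as follows. The coordinates and bearings are invented for the example, and the dissertation's method additionally handles camera geometry and uncertainty.

```python
import math

def triangulate(p1, bearing1, p2, bearing2):
    """Locate a target from two observation points and bearings (sketch).

    p1, p2             : (x, y) observer positions
    bearing1, bearing2 : bearings in radians, counter-clockwise from the x-axis
    Each observation defines a ray p + t*(cos b, sin b); the target sits at
    the intersection of the two rays.
    """
    x1, y1 = p1
    x2, y2 = p2
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # 2-D cross product of directions
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel; no unique intersection")
    # Solve for the distance t along the first ray.
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t * d1[0], y1 + t * d1[1])
```

Two UGVs (or one UGV at two poses) observing the same individual thus suffice for a position fix, provided their bearings are not parallel.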
120

The use of data in social media marketing : An explorative study of data insights in social media marketing

Grönlund, Sophie, Schytt, Tommy January 2017 (has links)
Marketing possibilities on the Internet are growing, and so is social media marketing. The budget devoted to marketing activities on social media increases every year, as does the time users spend on social media. With this increased activity comes a vast amount of data, which creates endless opportunities for companies to optimize their marketing activities. In marketing, the most important thing has always been to know your customers and how to reach them. The Internet, and the data that come with it, has made it possible for companies to get to know their customers even better and to reach them with more precision, if data is used correctly.   A gap was identified in the literature search: it is not always clear how to utilize social media for marketing, and it is not easy to analyze and interpret the data derived from social media. This has led to a lack of knowledge on how data can be used for social media activities. From the identified gap regarding data usage in social media marketing, a research question was formulated:   “How is data used in brands’ strategies for social media?”   A qualitative research design with semi-structured interviews was used to examine the research question. A purposeful sample of eleven respondents, defined as experts within the research field, from ten different companies was selected. A pilot study was carried out to gain insight into the identified gap, to set a base for the theoretical framework, and to optimize the interview questions. All respondents represented agencies except for the respondent in the pilot study.   Academics and business communities are interested in how data is used for marketing purposes, and this thesis therefore elaborates on how data can be used in social media activities. Branding activities are becoming more engaged with customers, so marketers need to keep up to date with new and emerging trends.
Furthermore, the aim was to explore how data is used in social media marketing and how data affects decisions in social media strategies.   The results of this study show that data is used to define audiences on social media and to enable greater reach of messages to those audiences. The audience is defined by data analysis, mostly based on consumer behavior on social media. To achieve reach, marketers use programmatic buying tools, which are based on data and ultimately enable conversions among the audience. Data is also analyzed by opinion mining, where data insights can show which topics customers are engaged in. Data insights can further give direction on how content can encourage engagement among the targeted audience. Lastly, the results show that it is important to have knowledge about how to analyze, interpret, and use data insights in order to create successful social media activities.
