61

Data-Driven Decision-Making in Urban Schools That Transitioned From Focus or Priority to Good Standing

Ware, Danielle 01 January 2018 (has links)
Despite the importance an urban school district places on data-driven decision-making (DDDM) to drive instruction, implementation remains a challenge. The purpose of this study was to investigate how support systems affected the implementation of DDDM to drive instructional practices in three urban schools that recently transitioned from priority or focus to good standing on the State Accountability Report. The study aligned with the organizational-supports conceptual framework, with an emphasis on data accessibility, collection methods, reliability and validity, the use of coaches and data teams, professional development, and data-driven leaders. Through the collection of qualitative data from one-on-one interviews, the research questions addressed the perspectives of three school leaders and nine teachers on data culture and data-driven instructional practices. The data were triangulated, coded, and analyzed to identify patterns and themes. Findings suggest that leaders create a data-driven school culture by establishing a school-wide vision, developing a DDDM cycle, creating a collaborative DDDM support system, communicating data as a school community, and changing the way technology is used in DDDM initiatives. Based on the findings, a project in the form of a white paper was developed, using research to support the conclusion that when data are regularly used to hone student skills, a positive shift in overall teacher practice occurs. This shift provides the potential for positive social change when students have opportunities to attain academic goals, resulting in increased student achievement and higher graduation rates.
62

A novel approach for the improvement of error traceability and data-driven quality predictions in spindle units

Rangaraju, Adithya January 2021 (has links)
Research on the impact of component degradation on the surface quality produced by machine tool spindles is limited, and this gap is the primary motivation for this research. It is common in the manufacturing industry to replace components even if they still have some Remaining Useful Life (RUL), resulting in an ineffective maintenance strategy. The primary objective of this thesis is to design and construct an Exchangeable Spindle Unit (ESU) test stand that aims at capturing the influence of the failure transition of components during machining and its effects on the quality of the surface. Current machine tools cannot be tested with extreme component degradation, especially the spindle, since the degrading elements can lead to permanent damage, and machine tools are expensive to repair. The ESU substitutes and decouples the machine tool spindle to investigate the influence of deteriorated components on the response, so that the machine tool spindle does not take the degrading effects. Data-driven quality control is another essential factor which many industries try to implement in their production lines. In a traditional manufacturing scenario, quality inspections are performed to check whether the measured parameters are within nominal standards at the end of a production line or between processes. A significant flaw in the traditional approach is its inability to map the degradation of components to quality. Condition monitoring techniques can resolve this problem and help identify defects early in production. This research focuses on two objectives. The first aims at capturing component degradation by artificially inducing imbalance into the ESU shaft and capturing the excitation behavior during machining with an end mill tool. Imbalance effects are quantified by adding mass onto the ESU spindle shaft. The varying effects of the mass are captured and characterized using vibration signals. The second objective is to establish a correlation between the surface quality of the machined part and the characterized vibration signals using Bagged Ensemble Tree (BET) machine learning models. The results show a good correlation between the surface roughness and the accelerometer signals. A comparison study between a balanced and an imbalanced spindle, along with the resultant surface quality, is presented in this research.
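To make the modelling step concrete, the following is a minimal sketch of how vibration features (for example RMS, peak, and kurtosis of accelerometer signals plus the added imbalance mass) could be mapped to measured surface roughness with a bagged tree ensemble in scikit-learn; the feature set and data here are placeholder assumptions, not the thesis's actual dataset.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Placeholder data: each row is one machining pass; columns stand in for
# vibration features (RMS, peak, kurtosis) and the added imbalance mass.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
# Hypothetical roughness response correlated with the vibration features.
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagged ensemble of decision trees, i.e. a "Bagged Ensemble Tree"-style model.
bet = BaggingRegressor(DecisionTreeRegressor(max_depth=6),
                       n_estimators=100, random_state=0)
bet.fit(X_train, y_train)
print("R^2 on held-out passes:", r2_score(y_test, bet.predict(X_test)))
```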
63

Narrative to Action in the Creation and Performance of Music with Data-driven Instruments

Wang, Chi 06 1900 (has links)
The seven compositions that comprise this dissertation are represented by the following files: text file (pdf), seven video performances (mp4), and corresponding zipped files of custom software and affiliated files (various file types). / This Digital Portfolio Dissertation centers on a collection of seven digital videos of performances of original electroacoustic compositions that feature data-driven instruments. The dissertation also includes a copy of the original software and affiliated files used in performing the portfolio of music, and a text document that analyzes and describes the following for each of the seven compositions: (1) the design and implementation of each of the seven complete data-driven instruments; (2) the musical challenges and opportunities provided by data-driven instruments; (3) the performance techniques employed; (4) the compositional structure; (5) the sound synthesis techniques used; and (6) the data-mapping strategies used. The seven compositions demonstrate a variety of electroacoustic and performance techniques and employ a range of interface devices as front-ends to the data-driven instruments. The seven interfaces that I chose to use for my compositions include the Wacom Tablet, the Leap Motion device for hand and finger detection, the Blue Air infrared sensor device for distance measurements, the Nintendo Wii Remote wireless game controller, the Gametrak three-dimensional position tracking system, the eMotion™ Wireless Sensor System, and a custom sensor-based interface that I designed and fabricated. The title of this dissertation derives from the extra-musical impulses that drove the creation of the seven original electroacoustic compositions for data-driven instruments. Of the seven compositions, six of the pieces have connections to literature. Although there is a literary sheen to these musical works, the primary impulses of these compositions arise from the notion of absolute music – music for music's sake, music that is focused on sound and the emotional and intellectual stimulus such sound can produce when humans experience it. Thus, I simultaneously work both sides of the musical street, with my compositions containing both extra-musical and absolute musical substance.
64

The Major Challenges in DDDM Implementation: A Single-Case Study: What are the Main Challenges for Business-to-Business MNCs to Implement a Data-Driven Decision-Making Strategy?

Varvne, Matilda, Cederholm, Simon, Medbo, Anton January 2020 (has links)
Over the past years, the value of data and DDDM has increased significantly as technological advancements have made it possible to store and analyze large amounts of data at a reasonable cost. This has resulted in completely new business models that have disrupted whole industries. DDDM allows businesses to base their decisions on data, as opposed to gut feeling. Up to this point, the literature provides a general view of the major challenges corporations encounter when implementing a DDDM strategy. However, as the field is still rather new, the challenges identified remain very general, and many corporations, especially B2B MNCs selling consumer goods, seem to struggle with this implementation. Hence, a single-case study of such a corporation, named Alpha, was carried out with the purpose of exploring its major challenges in this process. Semi-structured interviews revealed four major findings: execution and organizational culture, which were supported in existing literature, and two additional findings associated with organizational structure and consumer behavior data, which were discovered in the case of Alpha. Based on this, the conclusion drawn was that B2B MNCs selling consumer goods face the challenges of identifying local markets as frontrunners for strategies such as becoming more data-driven, as well as the need to find a way to retrieve consumer behavior data. These two main challenges can provide a starting point for managers implementing DDDM strategies in B2B MNCs selling consumer goods in the future.
65

Efficient learning on high-dimensional operational data

Zhang, Hongyi January 2019 (has links)
In a networked system, operational data collected by sensors or extracted from system logs can be used for target performance prediction, anomaly detection, and similar tasks. However, the number of metrics collected from a networked system is very large and can reach about 10⁶ for a medium-sized system. This project analyzes and compares different unsupervised machine learning methods, such as unsupervised feature selection, Principal Component Analysis, and autoencoders, which can enable efficient learning from high-dimensional data. The objective is to reduce the dimensionality of the input space while maintaining prediction performance comparable to learning on the full feature space. The data used in this project were collected from a KTH testbed running a Video-on-Demand service and a Key-Value store under different types of traffic load. The findings confirm the manifold hypothesis, which states that real-world high-dimensional data lie on low-dimensional manifolds embedded within the high-dimensional space. In addition, this project investigates data visualization of infrastructure measurements through two-dimensional plots. The results show that data separation can be achieved by using different mapping methods.
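As an illustration of the comparison described above, the sketch below reduces a high-dimensional feature matrix with PCA and compares downstream prediction performance against a model trained on the full feature space; the synthetic data, component count, and choice of regressor are placeholder assumptions rather than the project's actual testbed traces.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Placeholder operational data: 1000 samples, 500 metrics, one latency target.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 500))
y = X[:, :10].sum(axis=1) + 0.1 * rng.normal(size=1000)  # low intrinsic dimension

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: learn on the full feature space.
full = Ridge().fit(X_tr, y_tr)
print("full-space MAE:", mean_absolute_error(y_te, full.predict(X_te)))

# Reduced: project onto a low-dimensional subspace first, then learn.
pca = PCA(n_components=20).fit(X_tr)
red = Ridge().fit(pca.transform(X_tr), y_tr)
print("PCA(20) MAE:", mean_absolute_error(y_te, red.predict(pca.transform(X_te))))
```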
66

Aligning Data-Driven Decision-Making and Knowledge Management in High-Security Environments / Samordning av datadrivet beslutsfattande och kunskapshantering i högsäkerhetsmiljöer

Holma, Hampus, Jönsson, Hugo January 2024 (has links)
This thesis explores the implementation and improvement of data-driven processes within a Swedish industrial organization, specifically focusing on long-term maintenance planning in the energy sector. Despite the recognized benefits of data-driven decision-making, many organizations, including those in the energy sector, struggle to fully adopt this approach due to challenges such as organizational culture, knowledge management, and lack of top-management support. This study addresses these challenges by investigating how data are currently utilized within a Swedish energy producer, identifying barriers to effective data use, and exploring the role of knowledge sharing in enhancing data-driven practices. Employing theoretical models such as Nonaka's SECI model and the data analytics capability process maturity model of Gökalp et al. (2020), the research highlights that while some data-driven strategies are in place, there is a need for more standardized processes and greater involvement from top management. The study reveals significant impacts of knowledge sharing on data utilization, identifying barriers such as lack of training, scheduling conflicts, and physical and informational silos due to high-security requirements. Furthermore, it examines the gap between data availability and data utilization, attributing it to factors such as the complexity of information systems, perceived data quality issues, and insufficient involvement of knowledgeable personnel. The findings suggest that addressing these issues through improved training, streamlined data systems, and strategic management of high-security constraints can enhance the overall effectiveness of data-driven decision-making. By fostering a data-driven culture and enhancing knowledge-sharing practices, the organization can better leverage its data assets, ultimately improving maintenance planning and operational efficiency in a high-security, regulated environment.
67

A Multi-Site Case Study: Acculturating Middle Schools to Use Data-Driven Instruction for Improved Student Achievement

James, Rebecca C. 05 January 2011 (has links)
In the modern era of high-stakes accountability, test data have become much more than a simple comparison (Schmoker, 2006; Payne & Miller, 2009). The information provided in modern data reports has become an invaluable tool to drive instruction in classrooms. However, there is a lack of good training for educators to evaluate data and translate findings into solid practices that can improve student learning (Blair, 2006; Dynarski, 2008; Light, Wexler, & Heinze, 2005; Payne & Miller, 2009). Some schools are good at collecting data but often fall short when deciding what to do next. It is the role of the principal to serve as an instructional leader and guide teachers in answering the recurring question of "now what?" The purpose of this study was to investigate ways in which principals build successful data-driven instructional systems within their schools, using a qualitative multi-site case study method. This research utilized a triangulation approach with structured interviews, on-site visits, and document reviews involving various middle school supervisors, principals, and teachers. The findings are presented in four common themes and patterns identified as essential components administrators used to implement data-driven instructional systems to improve student achievement. The themes are 1) administrators must clearly define the vision and set the expectation of using data to improve student achievement, 2) administrators must take an active role in the data-driven process, 3) data must be easily accessible to stakeholders, and 4) stakeholders must devote time on a regular basis to the data-driven process. The four themes led to ten common steps administrators can use to acculturate their school or school division to the data-driven instruction process. / Ed. D.
68

Data-Driven Diagnosis For Fuel Injectors Of Diesel Engines In Heavy-Duty Trucks

Eriksson, Felix, Björkkvist, Emely January 2024 (has links)
The diesel engine in heavy-duty trucks is a complex system with many components working together, and a malfunction in any of these components can impact engine performance and result in increased emissions. Fault detection and diagnosis have therefore become essential in modern vehicles, ensuring optimal performance and compliance with progressively stricter legal requirements. One of the most common faults in a diesel engine is faulty injectors, which can lead to fluctuations in the amount of fuel injected. Detecting these issues is crucial, prompting a growing interest in exploring additional signals beyond the currently used signal to enhance the performance and robustness of diagnosing this fault. In this work, an investigation was conducted to identify signals that correlate with faulty injectors causing over- and underfueling. It was found that the NOx, O2, and exhaust pressure signals are sensitive to this fault and could potentially serve as additional diagnostic signals. With these signals, two different diagnostic methods were evaluated to assess their effectiveness in detecting injector faults. The methods evaluated were data-driven residuals and a Random Forest classifier. The data-driven residuals, when combined with the CUSUM algorithm, demonstrated promising results in detecting faulty injectors. The O2 signal proved effective in identifying both fault instances, while NOx and exhaust pressure were more effective at detecting overfueling. The Random Forest classifier also showed good performance in detecting both over- and underfueling. However, it was observed that using a classifier requires more extensive data preprocessing. Two preprocessing methods were employed: integrating previous measurements and calculating statistical measures over a defined time span. Both methods showed promising results, with the latter proving to be the better choice. Additionally, the generalization capabilities of these methods across different operating conditions were evaluated. It was demonstrated that the data-driven residuals yielded better results compared to the classifier, which required training on new cases to perform effectively.
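For illustration, the following is a minimal sketch of a one-sided CUSUM test applied to a data-driven residual, in the spirit of the residual-plus-CUSUM approach described above; the drift and threshold parameters and the synthetic residual are placeholder assumptions, not values from the thesis.

```python
import numpy as np

def cusum_alarm(residual, drift=0.5, threshold=5.0):
    """One-sided CUSUM: accumulate residual deviations above `drift`
    and return the sample index at which the sum exceeds `threshold`."""
    g = 0.0
    for k, r in enumerate(residual):
        g = max(0.0, g + r - drift)
        if g > threshold:
            return k  # first sample at which the fault is flagged
    return None

# Synthetic residual: near zero while healthy, biased after a hypothetical
# injector overfueling fault is introduced at sample 100.
rng = np.random.default_rng(1)
residual = np.concatenate([rng.normal(0.0, 0.3, 100), rng.normal(1.5, 0.3, 50)])
print("alarm at sample:", cusum_alarm(residual))
```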
69

Data driven modelling for environmental water management

Syed, Mofazzal January 2007 (has links)
Management of water quality is generally based on physically based equations or hypotheses describing the behaviour of water bodies. In recent years, models built on the basis of larger amounts of collected data have been gaining popularity. This modelling approach can be called data-driven modelling. Observational data represent specific knowledge, whereas a hypothesis represents a generalization of this knowledge that implies and characterizes all such observational data. Traditionally, deterministic numerical models have been used for predicting flow and water quality processes in inland and coastal basins. These models generally take a long time to run and cannot be used as on-line decision support tools, so imminent threats such as public health risks and flooding cannot be predicted in time. Data-driven models, in contrast, are data-intensive, and there are some limitations to this approach. The extrapolation capability of data-driven methods is a matter of conjecture. Furthermore, the extensive data required for building a data-driven model can be time- and resource-consuming to collect, and in the case of predicting the impact of a future development the data are unlikely to exist. The main objective of the study was to develop an integrated approach for rapid prediction of bathing water quality in estuarine and coastal waters. Faecal Coliforms (FC) were used as a water quality indicator, and two of the most popular data mining techniques, namely Genetic Programming (GP) and Artificial Neural Networks (ANNs), were used to predict the FC levels in a pilot basin. In order to provide enough data for training and testing the neural networks, a calibrated hydrodynamic and water quality model was used to generate input data for the neural networks. A novel non-linear data analysis technique, called the Gamma Test, was used to determine the data noise level and the number of data points required for developing smooth neural network models. Details are given of the data-driven models, the numerical models, and the Gamma Test. Details are also given of a series of experiments undertaken to test data-driven model performance for different numbers of input parameters and time lags. The response time of the receiving water quality to the input boundary conditions obtained from the hydrodynamic model has been shown to be useful knowledge for developing accurate and efficient neural networks. It is known that a natural phenomenon such as bacterial decay is affected by a whole host of parameters which cannot be captured accurately using deterministic models alone. Therefore, the data-driven approach has been investigated using field survey data collected in Cardiff Bay to investigate the relationship between bacterial decay and other parameters. Both the GP and ANN models gave similar, if not better, predictions of the field data in comparison with the deterministic model, with the added benefit of almost instant prediction of the bacterial levels for this recreational water body. The models have also been investigated using idealised and controlled laboratory data for the velocity distributions along compound channel reaches, with idealised rods located on the floodplain to replicate large vegetation (such as mangrove trees).
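As a sketch of the data-driven component described above (not the thesis's calibrated setup), the example below builds lagged input features from a boundary-condition time series and trains a small neural network to predict an indicator concentration; the lag count, network size, and synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def make_lagged(series, target, n_lags):
    """Stack the previous n_lags values of `series` as inputs for each target sample."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, target[n_lags:]

# Synthetic boundary-condition input (e.g., upstream discharge) and a delayed,
# noisy response standing in for faecal coliform levels in the receiving water.
rng = np.random.default_rng(7)
inflow = rng.normal(size=600).cumsum()
fc = 0.6 * np.roll(inflow, 3) + 0.2 * rng.normal(size=600)

X, y = make_lagged(inflow, fc, n_lags=6)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print("test R^2:", ann.score(X_te, y_te))
```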
70

DDD metodologija paremto projektavimo įrankio kodo generatoriaus kūrimas ir tyrimas / DDD methodology based design tool‘s code generator development and research

Valinčius, Kęstutis 13 August 2010 (has links)
The Data Driven Design (DDD) methodology is widely used in various software systems. Its aim is to separate, and run in parallel, the work of software developers and scenario designers. Core functionality is implemented via interfaces, and dynamics via scenario support. This introduces a level of abstraction that makes the software product more flexible and more easily maintained and improved; in addition, these activities can be performed in parallel. The main aim of this work was to create an automatic code generator that transforms a graphically modelled scenario into software code. Automatically generated code greatly reduces the probability of syntactic and logical errors; everything depends on the modelled scenario. Code is generated almost instantly and requires no intervention from the developer. This aim was achieved by moving business-logic design into the scenario-design process and implementing the code generation subsystem as a web service. Using a cartridge (plug-in) based system, code is generated without being tied to a specific architecture, technology, or application domain. A scenario is modelled in the graphical scenario modelling tool and then transformed into a metalanguage, from which the final software code is generated. The metalanguage is an XML-based language defined by specific rules. No major problems were encountered when implementing the experimental system. Modelling a new system with the design tool sped up the development process sevenfold, which demonstrates the modelling tool's advantage over manual programming.
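To illustrate the kind of transformation described (not the thesis's actual metalanguage or generator), here is a minimal sketch in which a scenario expressed in a made-up XML metalanguage is turned into code via simple templates; the element names, attributes, and output language are purely hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical scenario in an XML metalanguage: a sequence of named steps.
SCENARIO = """
<scenario name="Checkout">
  <step action="validate_cart"/>
  <step action="charge_payment"/>
  <step action="send_receipt"/>
</scenario>
"""

def generate_code(xml_text: str) -> str:
    """Emit a Python function whose body calls each scenario step in order."""
    root = ET.fromstring(xml_text)
    lines = [f"def run_{root.get('name').lower()}(ctx):"]
    for step in root.findall("step"):
        lines.append(f"    ctx.{step.get('action')}()")
    return "\n".join(lines)

print(generate_code(SCENARIO))
# def run_checkout(ctx):
#     ctx.validate_cart()
#     ctx.charge_payment()
#     ctx.send_receipt()
```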
