  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

國民中學校長資訊使用環境對資料導向決策影響之研究:結構方程模式之應用 / Research on the influence of information use environments on principals' data-driven decision-making in junior high schools: an application of structural equation modeling

何奇南, Ho, Chi Nan Unknown Date (has links)
This study investigated the influence of junior high school principals' information use environments on their data-driven decision-making, examined whether principals' personal backgrounds produce differences in either construct, and offered recommendations for the relevant authorities. To achieve these purposes, a questionnaire survey was administered as a census of principals of public junior high schools in nine counties and cities of northern and central Taiwan (Yilan County, Keelung City, Taipei City, New Taipei City, Taoyuan County, Hsinchu County, Hsinchu City, Miaoli County, and Taichung City). Of 357 questionnaires distributed, 292 valid responses were returned, a valid return rate of 81.8%. Data were analyzed with SPSS 17.0 for Windows and AMOS 7.0, yielding the following conclusions: 1. Both the operation of principals' information use environments and the frequency of their data-driven decision-making were at a mid-to-high level. 2. There were no significant differences in the operation of information use environments by gender, prior experience as a director, education level, school size, school history, or school location. 3. There were no significant differences in the frequency of data-driven decision-making by prior experience as a director, age, years of service as principal, school history, or school location. 4. Principals of different genders and education levels differed in their perception of leadership in collaborative partnerships and larger-context politics. 5. Principals of different ages, education levels, years of service as principal, and school sizes differed in their perception of data-analysis skills. 6. Principals at schools of different sizes differed in their perception of leadership in school vision and leadership in school instruction. 7. The model constructed in this study was supported by a structural equation modeling test: principals' information use environments have a significant positive influence on data-driven decision-making. Finally, based on these conclusions, specific recommendations were proposed for educational administration authorities and junior high school administrators.
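The structural relation tested in conclusion 7 above can be illustrated with a toy sketch. This is not the thesis's SEM model (which involves latent constructs and AMOS); it is a single observed path, information use environment (IUE) to data-driven decision-making (DDDM), estimated by ordinary least squares on synthetic survey-like scores. The score scales, the true effect size of 0.6, and the noise level are all assumptions made for the illustration.

```python
import numpy as np

# Synthetic Likert-style data: IUE scores drive DDDM scores with slope 0.6.
rng = np.random.default_rng(42)
n = 292                                    # matches the study's valid sample size
iue = rng.normal(3.5, 0.5, n)              # mid-to-high IUE scores (assumed)
dddm = 1.0 + 0.6 * iue + rng.normal(0, 0.3, n)

# OLS estimate of the "path coefficient" of IUE on DDDM
X = np.column_stack([np.ones(n), iue])
beta = np.linalg.lstsq(X, dddm, rcond=None)[0]
print(f"estimated path coefficient: {beta[1]:.3f}")  # near the true 0.6
```

A positive, significant coefficient here is the single-path analogue of the positive influence the SEM test supported.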
232

臺北市國民中學行政人員資訊使用環境對資料導向決策影響之研究 / Research on the Influence of Information Use Environment on Administrators’ Data-driven Decision-making in Junior High Schools in Taipei City

林仕崇, Lin, Shih Tsung Unknown Date (has links)
This study examined the current state of information use environments and data-driven decision-making among junior high school administrators in Taipei City, analyzed differences in their perceptions across personal background and school variables, explored the relationship between the two constructs, and offered recommendations for the relevant authorities. To achieve these purposes, a questionnaire survey was administered to administrators (directors and section chiefs) of 37 public junior high schools in Taipei City. Of 471 questionnaires distributed, 420 valid responses were returned, a valid return rate of 89.1%. Data were analyzed with SPSS 17.0 for Windows and LISREL 8.80, yielding the following conclusions: 1. Administrators' perceptions of the information use environment and of data-driven decision-making were at a mid-to-high level. 2. Male administrators perceived both the information use environment and data-driven decision-making more highly than female administrators. 3. There were no significant differences in either perception by age, years of teaching service, or years of administrative service. 4. Administrators holding a master's degree (including 40-credit programs) or higher scored higher on both constructs than those holding only a bachelor's degree. 5. There was no significant difference across administrative offices in perception of the information use environment, but administrators in the academic affairs office perceived data-driven decision-making more highly than those in the general affairs office. 6. Administrators at large schools (49 or more classes) perceived the information use environment more highly than those at schools with 25 to 48 classes; perceptions of data-driven decision-making did not differ significantly by school size. 7. Administrators at schools less than 30 years old perceived the information use environment more highly than those at schools 31 to 60 years old and those at schools 61 to 90 years old; for data-driven decision-making, administrators at schools under 30 years old and at schools 31 to 60 years old scored higher than those at schools 61 to 90 years old. 8. The information use environment has a significant positive influence on data-driven decision-making. Finally, based on these conclusions, specific recommendations were proposed for educational administration authorities and current junior high school administrators. Keywords: information use environment, data-driven decision-making
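The gender comparison in conclusion 2 above is the kind of two-group mean test such survey studies report. A minimal sketch, on synthetic data: the group sizes, means, and spreads are invented for illustration, and Welch's t statistic is computed directly rather than with a statistics package.

```python
import numpy as np

# Synthetic Likert-style perception scores for two groups (assumed values).
rng = np.random.default_rng(0)
male = rng.normal(3.9, 0.5, 210)
female = rng.normal(3.5, 0.5, 210)

# Welch's t statistic: mean difference over its standard error
se = np.sqrt(male.var(ddof=1) / len(male) + female.var(ddof=1) / len(female))
t_stat = (male.mean() - female.mean()) / se
print(f"t = {t_stat:.2f}")  # a large positive t suggests a real group difference
```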
233

Acquiring 3D Full-body Motion from Noisy and Ambiguous Input

Lou, Hui 2012 May 1900 (has links)
Natural human motion is in high demand and widely used in a variety of applications, such as video games and virtual reality. However, acquiring full-body motion remains challenging because a capture system must accurately capture a wide variety of human actions without requiring a considerable amount of time and skill to assemble. For instance, commercial optical motion capture systems such as Vicon can capture human motion with high accuracy and resolution, but they often require post-processing by experts, which is time-consuming and costly. Microsoft Kinect, despite its high popularity and wide application, does not provide accurate reconstruction of complex movements when significant occlusions occur. This dissertation explores two approaches that accurately reconstruct full-body human motion from noisy and ambiguous input data captured by commercial motion capture devices. The first approach automatically generates high-quality human motion from noisy data obtained from commercial optical motion capture systems, eliminating the need for post-processing. The second approach accurately captures a wide variety of human motion, even under significant occlusions, using color/depth data captured by a single Kinect camera. The common theme underlying both approaches is the use of prior knowledge embedded in a pre-recorded motion capture database to reduce the reconstruction ambiguity caused by noisy and ambiguous input and to constrain the solution to lie in the natural motion space. More specifically, the first approach constructs a series of spatial-temporal filter bases from pre-captured human motion data and employs them, along with robust statistics techniques, to filter motion data corrupted by noise and outliers.
The second approach formulates the problem in a Maximum a Posteriori (MAP) framework and generates the most likely pose that both explains the observations and is consistent with the patterns embedded in the pre-recorded motion capture database. We demonstrate the effectiveness of our approaches through extensive numerical evaluations on synthetic data and comparisons against results created by commercial motion capture systems. The first approach can effectively denoise a wide variety of noisy motion data, including walking, running, jumping, and swimming, while the second approach is shown to be capable of accurately reconstructing a wider range of motions than Microsoft Kinect.
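The MAP idea above, a prior over natural poses pulling a noisy observation toward plausible values, has a closed form in the simplest Gaussian case: the estimate is a precision-weighted average of prior mean and observation. The sketch below uses one scalar "joint angle" with invented numbers; the real system works over full poses and a learned motion prior.

```python
# One-dimensional Gaussian MAP estimate (illustrative numbers, not the thesis's).
prior_mean, prior_var = 0.0, 1.0   # "natural pose" prior for one joint angle
obs, obs_var = 2.0, 0.5            # noisy Kinect-like measurement of that angle

# Weight each source by its inverse variance (precision); higher-confidence
# sources pull the estimate harder.
w_prior, w_obs = 1.0 / prior_var, 1.0 / obs_var
map_est = (w_prior * prior_mean + w_obs * obs) / (w_prior + w_obs)
print(f"MAP estimate: {map_est:.3f}")  # 1.333, pulled from 2.0 toward the prior
```

Because the observation here is twice as precise as the prior, the estimate lands two-thirds of the way from the prior mean to the observation.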
234

A Fuzzy Software Prototype For Spatial Phenomena: Case Study Precipitation Distribution

Yanar, Tahsin Alp 01 October 2010 (has links) (PDF)
As the complexity of a spatial phenomenon increases, traditional modeling becomes impractical. Alternatively, data-driven modeling, which is based on the analysis of data characterizing the phenomenon, can be used. This thesis addresses the generation of understandable and reliable spatial models from observational data, and proposes an interpretability-oriented, data-driven fuzzy modeling approach. The methodology is based on constructing fuzzy models from data, tuning them, and simplifying them. Mamdani-type fuzzy models with triangular membership functions are considered. Fuzzy models are constructed using fuzzy clustering algorithms, and the simulated annealing metaheuristic is adapted for the tuning step. To obtain compact and interpretable fuzzy models, a simplification methodology is proposed that reduces the number of fuzzy sets for each variable and simplifies the rule base. Prototype software is developed, and mean annual precipitation data for Turkey are examined as a case study to assess the results of the approach in terms of both precision and interpretability. In the first step of the approach, in which fuzzy models are constructed from data, the "Fuzzy Clustering and Data Analysis Toolbox", developed for use with MATLAB, is used. The remaining steps, tuning the constructed models with the adapted simulated annealing algorithm and generating compact, interpretable models with the simplification algorithm, use the developed prototype software. If accuracy is the primary objective, the proposed approach can produce more accurate solutions for training data than the geographically weighted regression method: the minimum training error produced by the proposed approach is 74.82 mm, while the error obtained by geographically weighted regression is 106.78 mm. The minimum error on test data is 202.93 mm.
An understandable fuzzy model for annual precipitation is generated with only 12 membership functions and 8 fuzzy rules. Furthermore, more interpretable fuzzy models are obtained when the Gath-Geva fuzzy clustering algorithm is used during fuzzy model construction.
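The tuning step above adapts simulated annealing. A minimal sketch of the metaheuristic itself, minimizing an invented one-dimensional objective rather than fuzzy model parameters: worse moves are accepted with Boltzmann probability so the search can escape local minima, and the temperature decays geometrically. The objective, schedule, and step size are all assumptions for the illustration.

```python
import math
import random

# Toy objective with local minima; global minimum is near x = 3.5.
random.seed(1)
f = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)

x = 0.0
best_x, best_f = x, f(x)
T = 5.0
while T > 1e-3:
    cand = x + random.uniform(-0.5, 0.5)      # random neighbor
    delta = f(cand) - f(x)
    # accept improvements always; accept worse moves with prob. exp(-delta/T)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand
    if f(x) < best_f:
        best_x, best_f = x, f(x)
    T *= 0.99                                 # geometric cooling schedule
print(f"best x = {best_x:.2f}, f(best x) = {best_f:.2f}")
```

In the thesis's setting the "state" would be the membership function parameters and the objective a model error term, but the accept/cool loop has the same shape.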
235

Processing Turkish Radiology Reports

Hadimli, Kerem 01 May 2011 (has links) (PDF)
Radiology departments utilize various techniques for visualizing patients' bodies, and medical doctors write narrative free-text reports describing the findings in these visualizations. The information within these narrative reports needs to be extracted for medical information systems. Turkish is a highly agglutinative language, and this poses problems for information retrieval and extraction from Turkish free texts. In this thesis, two alternative methods for information retrieval and structured information extraction from Turkish radiology reports are presented: one rule-based and one data-driven. Contrary to previous studies of medical NLP systems, neither method utilizes a medical lexicon or ontology. Information extraction is performed at the level of extracting medically related phrases from the sentence. The aim is to measure the baseline performance the Turkish language can provide for medical information extraction and retrieval, in isolation from other factors.
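A hypothetical sketch of the rule-based flavor of phrase extraction described above: a single pattern pulling measurement phrases out of a report sentence. The pattern, the sample sentence (ASCII-transliterated Turkish), and the normalization are invented for illustration and are not the thesis's actual rules.

```python
import re

# Rule: a number (Turkish decimal comma allowed) followed by a length unit.
SIZE = re.compile(r"(\d+(?:[.,]\d+)?)\s*(mm|cm)")

report = "Sag lobda 12 mm ve 3,5 cm boyutlarinda iki nodul izlendi."
# Normalize the decimal comma and pair each value with its unit.
findings = [(float(v.replace(",", ".")), u) for v, u in SIZE.findall(report)]
print(findings)  # [(12.0, 'mm'), (3.5, 'cm')]
```

Real systems need many such rules plus morphological analysis to cope with agglutination, which is exactly the difficulty the abstract points to.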
236

Real-time estimation of arterial performance measures using a data-driven microscopic traffic simulation technique

Henclewood, Dwayne Anthony 06 June 2012 (has links)
Traffic congestion is a one-hundred-billion-dollar problem in the US. The cost of congestion has trended upward over the last few decades, but has decreased slightly in recent years, partly due to the impact of congestion-reduction strategies. The impact of these strategies, however, is largely experienced on freeways rather than arterials. This discrepancy is partially linked to the lack of real-time arterial traffic information, and this research effort seeks to address that lack. To do so, this effort developed a methodology that provides accurate estimates of arterial performance measures to transportation facility managers and travelers in real time. The methodology employs transmitted point-sensor data to drive an online, microscopic traffic simulation model. Its feasibility was examined through a series of experiments, each built upon the successes of the previous one while addressing its limitations. The results from each experiment were encouraging: they demonstrated the method's likely feasibility and the accuracy with which field estimates of performance measures may be obtained, and they support the viability of a real-world implementation of the method. An advanced calibration process was also developed as a means of improving the method's accuracy; this process will in turn inform future calibration efforts as more robust and accurate traffic simulation models are needed. The success of this method provides a template for real-time traffic simulation modeling capable of adequately addressing the lack of available arterial traffic information. In providing such information, it is hoped that transportation facility managers and travelers will make more informed decisions regarding more efficient management and usage of the nation's transportation network.
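As a hedged illustration of the kind of arterial performance measure the methodology above estimates, the sketch below computes corridor travel time from point-sensor spot speeds. The segment lengths and speeds are invented numbers; the actual method feeds such sensor data into a microscopic simulation rather than summing directly.

```python
# Per-segment spot data from point sensors along one arterial corridor (assumed).
segments_km = [0.4, 0.6, 0.5]           # segment lengths between sensors
sensor_speed_kmh = [35.0, 20.0, 45.0]   # spot speeds reported by each sensor

# Travel time per segment = length / speed; sum over the corridor, in minutes.
travel_time_min = sum(60 * l / v for l, v in zip(segments_km, sensor_speed_kmh))
print(f"estimated corridor travel time: {travel_time_min:.2f} min")
```

The direct sum ignores signal delay and queue spillback, which is precisely why the thesis drives a simulation model with the sensor data instead.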
237

Kompiuterinių žaidimų dirbtinio intelekto varikliuko uždaviniai ir jų sprendimas / The Problems and Solutions of Artificial Intelligence Engine for Games

Fiodorova, Jelena 30 May 2006 (has links)
Game Artificial Intelligence (AI) is the code in a game that makes computer-controlled opponents (agents) make smart decisions in the game world. Several AI problems are essential in games: pathfinding, decision making, generating players' characteristics, and game logic management. However, these problems are gameplay-dependent. The main goal of this study is to generalize AI problem solutions across games of many kinds, that is, to make AI solutions gameplay-independent. We achieved this goal by using data-driven design in our solutions. Using this approach, we separated the game logic level from the game code level, and this separation gave us the opportunity to manipulate game logic and data freely. We examined our decision-making system and determined that it is flexible and easy to use.
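The logic/code separation described above can be sketched minimally: the game logic lives in a data table of rules, while the engine code is a generic interpreter that never mentions any particular gameplay. The rule conditions, state fields, and actions below are invented for illustration.

```python
# Game logic as data: (condition on agent state) -> action, first match wins.
# Swapping this table changes behavior without touching the engine code.
RULES = [
    (lambda s: s["health"] < 30,     "retreat"),
    (lambda s: s["enemy_dist"] < 5,  "attack"),
    (lambda s: s["enemy_dist"] < 20, "chase"),
    (lambda s: True,                 "patrol"),   # default action
]

def decide(state):
    """Generic engine code: scan the data-defined rule table."""
    return next(action for cond, action in RULES if cond(state))

print(decide({"health": 80, "enemy_dist": 3}))   # attack
print(decide({"health": 10, "enemy_dist": 3}))   # retreat: low health wins
```

In a production system the table would be loaded from external data files, which is what makes the approach gameplay-independent.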
238

DATA ASSIMILATION AND VISUALIZATION FOR ENSEMBLE WILDLAND FIRE MODELS

Chakraborty, Soham 01 January 2008 (has links)
This thesis describes an observation function for a dynamic data-driven application system designed to produce short-range forecasts of the behavior of a wildland fire. The thesis presents an overview of the atmosphere-fire model, which models the complex interactions between the fire and the surrounding weather, and of the data assimilation module, which is responsible for assimilating sensor information into the model. The observation function plays an important role in data assimilation, as it is used to estimate the model variables at the sensor locations. Also described is the implementation of a portable and user-friendly visualization tool which displays the locations of wildfires in the Google Earth virtual globe.
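A toy "nudging" update in the spirit of the assimilation step above: the model state at a sensor location is pulled toward the observation, weighted by a gain expressing relative confidence. The state variable, values, and gain are illustrative assumptions, not the thesis's ensemble scheme.

```python
# Scalar assimilation update at one sensor location (invented numbers).
model_temp = 310.0   # model fire-front temperature at the sensor location (K)
obs_temp = 340.0     # temperature reported by the sensor (K)
gain = 0.4           # 0 = trust model only, 1 = trust observation only

# Analysis state: model forecast corrected toward the observation.
analysis = model_temp + gain * (obs_temp - model_temp)
print(f"analysis state: {analysis:.1f} K")  # 322.0 K
```

An ensemble filter generalizes this idea: the gain is derived from ensemble statistics instead of being fixed, and the observation function maps the full model state to each sensor's quantity.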
239

A robust & reliable Data-driven prognostics approach based on extreme learning machine and fuzzy clustering.

Javed, Kamran 09 April 2014 (has links) (PDF)
Prognostics and Health Management (PHM) aims to extend the life cycle of a physical asset while reducing operating and maintenance costs. Prognostics is therefore considered a key process with prediction capabilities: accurate estimates of a machine's Remaining Useful Life (RUL) make it possible to plan actions that increase safety, reduce downtime, and ensure mission completion and production efficiency. Recent studies show that data-driven approaches are increasingly applied to failure prognostics. They can be seen as "black box" models that learn system behavior directly from condition-monitoring data, characterize the current state of the system, and predict the future progression of faults. However, approximating the behavior of critical machinery is a difficult task that can lead to poor prognostics. Data-driven prognostic modeling raises the following questions. 1) How should raw monitoring data be processed to obtain suitable features that reflect the evolution of degradation? 2) How can degradation states be distinguished and failure criteria defined (which may vary from case to case)? 3) How can the models be made robust enough to perform stably under uncertain inputs that deviate from the learned experience, and reliable enough to handle unknown data (i.e., operating conditions, engineering variations, etc.)? 4) How can they be integrated easily under industrial constraints and requirements?
These questions are addressed in this thesis. They led to a new approach that goes beyond the limits of classical data-driven prognostic methods. The main contributions are as follows.
- The data-processing step is improved by a new feature-extraction approach using trigonometric and cumulative functions, based on three characteristics: monotonicity, trendability, and predictability. The main idea is to transform raw data into indicators that improve the accuracy of long-term predictions.
- To account for robustness, reliability, and applicability, a new prediction algorithm is proposed: the Summation Wavelet-Extreme Learning Machine (SW-ELM). SW-ELM ensures good prediction performance while reducing learning time. An ensemble of SW-ELM models is also proposed to quantify uncertainty and improve the accuracy of the estimates.
- Prognostic performance is further enhanced by a new health-assessment algorithm: Subtractive-Maximum Entropy Fuzzy Clustering (S-MEFC). S-MEFC is an unsupervised classification approach that uses maximum-entropy inference to represent the uncertainty of multidimensional data. It can determine the number of states automatically, without human intervention.
- The final prognostic model is obtained by integrating SW-ELM and S-MEFC to track the evolution of machine degradation with simultaneous predictions and discrete state estimation. This scheme also makes it possible to set failure thresholds dynamically and to estimate the RUL of the monitored machines.
The developments are validated on real data from three experimental platforms, PRONOSTIA FEMTO-ST (bearings test bed), CNC SIMTech (machining cutters), and C-MAPSS NASA (turbofan engines), as well as other benchmark data. Owing to the realistic nature of the proposed RUL estimation strategy, very promising results are achieved. The main perspective of this work, however, is to improve the reliability of the prognostic model.
240

Environmental prediction and risk analysis using fuzzy numbers and data-driven models

Khan, Usman Taqdees 17 December 2015 (has links)
Dissolved oxygen (DO) is an important water quality parameter that is used to assess the health of aquatic ecosystems. Typically, physically based numerical models are used to predict DO; however, these models do not capture the complexity and uncertainty seen in highly urbanised riverine environments. To overcome these limitations, this dissertation proposes an alternative approach that uses a combination of data-driven methods and fuzzy numbers to improve DO prediction in urban riverine environments. A major issue in implementing fuzzy numbers is that there is no consistent, transparent, and objective method to construct fuzzy numbers from observations. A new method to construct fuzzy numbers is proposed that uses the relationship between probability and possibility theory. Numerical experiments demonstrate that the typical linear membership functions are inappropriate for environmental data. A new algorithm to estimate the membership function is developed, in which a bin-size optimisation algorithm is paired with a numerical technique using the fuzzy extension principle. The developed method requires no assumptions about the underlying distribution, avoids the selection of an arbitrary bin size, and has the flexibility to create fuzzy numbers of different shapes. The impact of input-data resolution and error value on the membership function is analysed. Two new fuzzy data-driven methods, fuzzy linear regression and a fuzzy neural network, are proposed to predict DO using real-time data. These methods use fuzzy inputs, fuzzy outputs, and fuzzy model coefficients to characterise the total uncertainty; existing methods cannot accommodate fuzzy numbers for each of these variables. The new fuzzy regression method was compared against two existing fuzzy regression methods, Bayesian linear regression, and errors-in-variables regression.
The new method was better able to predict DO due to its ability to incorporate different sources of uncertainty in each component. A number of model assessment metrics were proposed to quantify fuzzy model performance, and fuzzy linear regression methods outperformed probability-based methods. Similar results were seen when the method was used for peak flow rate prediction. An existing fuzzy neural network model was refined through possibility-theory-based calibration of the network parameters and the use of fuzzy rather than crisp inputs. A method to find the optimum network architecture was proposed, selecting the number of hidden neurons and the amount of data used for training, validation, and testing. The performance of the updated fuzzy neural network was compared to the crisp results, and the method demonstrated an improved ability to predict low DO compared to non-fuzzy techniques. The fuzzy data-driven methods using non-linear membership functions correctly identified the occurrence of extreme events. These predictions were used to quantify risk using a new possibility-probability transformation, and all combinations of inputs that lead to a risk of low DO were identified to create a risk tool for water resource managers. Results from this research provide new tools to predict environmental factors in a highly complex and uncertain environment using fuzzy numbers.
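For reference, the triangular (linear) membership function is the baseline shape the dissertation argues is often inappropriate for environmental data; the proposed construction replaces it with data-derived, non-linear shapes. A minimal sketch of the baseline, with illustrative DO values in mg/L that are assumptions, not values from the study:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy number: support [a, c], peak (membership 1) at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# A fuzzy "low DO" concept centered at 4 mg/L with support 2-6 mg/L (assumed).
print(tri_membership(4.0, 2.0, 4.0, 6.0))  # 1.0 at the peak
print(tri_membership(5.0, 2.0, 4.0, 6.0))  # 0.5 halfway down the right side
print(tri_membership(7.0, 2.0, 4.0, 6.0))  # 0.0 outside the support
```

The membership grades here change linearly with x on each side of the peak; the dissertation's numerical experiments show observed environmental data rarely justify that linearity, motivating the histogram-based estimation algorithm.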
