11

用語彙概念構造(LCS)來說明kakeru構文的用法 ―ACT和BECOME擁有的成立點― / Explanation of the Kakeru Structure using LCS ─ Perfect Point of Event that ACT and BECOME Have ─

城戶秀則 Unknown Date (has links)
Prior research has not clearly examined the differences in semantic interpretation that arise in the kakeru construction. The aim of this thesis is to use Lexical Conceptual Structure (LCS) to capture these differences systematically and in a unified way, and to explain them explicitly. Previous studies held that a verb's aspect determines whether a kakeru construction is read as "about to begin the action" or "in the middle of the action", but the criteria for that verb classification were unclear, which limited the explanations; even attempts at a comprehensive account produced contradictions. This thesis is therefore the first to apply LCS theory to the study of the kakeru construction, using it to organize objectively the culmination point (lexical aspect) that each verb possesses, and to argue systematically how that culmination point affects the interpretation of the kakeru construction. The thesis consists of six chapters. The introduction states the purpose, motivation, method, structure, and scope of the study. Chapter 1 reviews previous research and identifies its problems. Chapter 2 uses LCS to examine what kind of culmination point verbs in Vendler's four classes possess, then organizes the thesis's claims and proposes a theory. Chapters 3 through 5 apply that theory to activity, achievement, and accomplishment verbs respectively, subdividing the verbs further on the basis of previous research and presenting examples to demonstrate the theory's validity. Chapter 6 concludes. From the above, the meaning of the kakeru construction can be summarized as "the action denoted by the verb (phrase) has not reached its culmination point", from which the 「将動相」 (about-to-act) and 「将変相」 (about-to-change) readings are derived according to context.
12

none

Liu, Fang-chen 23 July 2008 (has links)
No description available.
13

Diseño y Desarrollo de un Algoritmo de Detección de Patrones de Copia en Documentos Digitales

Zarate Rodriguez, Rodrigo Enrique January 2011 (has links)
The objective of this thesis is to develop an algorithm for detecting similarity patterns between digital documents, within the framework of Fondef project D081-1015, DOCODE (DOcument COpy DEtector). Two facts are fundamental: there is no culture of educating students in respect for intellectual property, and the use of computational tools in education keeps growing. This has led to an increasingly frequent practice, with precedents at the international level, known as "copy&paste". The problem has driven students to develop methods for avoiding detection when they decide, voluntarily or involuntarily, to plagiarize one or more sources, turning detection into a constant, slow, almost unmanageable struggle for instructors given the number of students they have. This work rests on the hypothesis that similarity between digital documents can be determined better by detecting word patterns designed for the Spanish language than by translating a copy detector built for a foreign language, as has been done in recent years. Spanish has a specific structure and a high degree of synonymy, so simply applying the logical criteria used for other languages is not efficient. In this context, an algorithm based on the search for common sequences between copy units is created, and from it, together with an edit-distance measure, a prototype is built that takes a group of documents and delivers a normalized indicator of the similarity between two particular documents.
The prototype is evaluated in an experiment on a sample of the PAN-2010 document collection, together with other similarity detectors, the LCS algorithm and n-gram comparison, under different conditions (copy units and copy types), obtaining different performances on the indicators precision, accuracy, recall, and F-measure. The main finding is that, given a minimum of 81% of the length of the target copy unit, copying can be detected regardless of the case studied. Precision and accuracy of 100% were obtained for verbatim copying across all copy units. The model is well calibrated for non-verbatim copying, with an accuracy of 85.3%. The output is normalized to give the user a result interpretable as a percentage of similarity between documents. Units such as the sentence or the paragraph are recommended: with a finite alphabet and an algorithm based on detecting common sequences, the algorithm overestimates similarity for detection units as small as the word. The main line of future work is non-verbatim copy detection. Ranking algorithms, specifically n-grams, combined with word-frequency algorithms such as TF-IDF, are recommended: this both reduces the comparison universe and makes it possible to associate particular concepts with characteristic topics, adapting similarity detection to a specific subject or area.
Finally, based on conversations with linguistics experts, in the long term it would be ideal to track indicators associated with the individual, so as to detect extraordinary jumps in linguistic development, such as vocabulary, spelling, and composition; this task rests on the hypothesis that an instructor can filter behavior less strictly on the basis of prior experience with the particular individual.
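The common-sequence approach this abstract describes can be illustrated with a minimal sketch. This is not the DOCODE implementation; the function names and the normalization by the shorter unit are illustrative assumptions, showing only the general idea of scoring two copy units by their longest common subsequence of words.

```python
def lcs_length(a, b):
    # Classic dynamic-programming longest-common-subsequence length
    # over two token sequences.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def similarity(unit_a, unit_b):
    # Normalized similarity between two copy units (e.g. sentences):
    # LCS length divided by the length of the shorter unit,
    # giving a value in [0, 1] interpretable as a percentage.
    ta, tb = unit_a.split(), unit_b.split()
    if not ta or not tb:
        return 0.0
    return lcs_length(ta, tb) / min(len(ta), len(tb))

print(similarity("el alumno copia el texto original",
                 "el alumno copia parte del texto"))
```

A sketch like this also makes the abstract's caveat concrete: with word-sized units the score saturates quickly, which is why sentence- or paragraph-sized units are recommended.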
14

Design and Investigation of a Multi Agent Based XCS Learning Classifier System with Distributed Rules

Pinseler, Mirko 27 February 2018 (has links)
This thesis introduces and investigates a new kind of rule-based evolutionary online learning system. It addresses the problem of distributing the knowledge of a Learning Classifier System, which is represented by a population of classifiers. The result is an XCS-derived Learning Classifier System, 'XCS with Distributed Rules' (XCS-DR), which introduces independent, interacting agents to distribute the system's acquired knowledge evenly. The agents act collaboratively to solve the problem instances at hand. XCS-DR's design and architecture are explained, and its classification performance is evaluated and scrutinized in detail. While it does not reach the optimal performance of the original XCS, XCS-DR still yields satisfactory classification results, and in the simple case of applying only one agent it performs as accurately as XCS.
15

A Performance Analysis Framework for Coreference Resolution Algorithms

Patel, Chandankumar Johakhim 29 August 2016 (has links)
No description available.
16

An exploratory analysis of littoral combat ships' ability to protect expeditionary strike groups

Efimba, Motale E. 09 1900
Approved for public release; distribution is unlimited. / This thesis uses an agent-based simulation model named EINSTein to perform an exploratory study of the feasibility of using Littoral Combat Ships (LCSs) to augment or replace the current defenses of Expeditionary Strike Groups (ESGs). Specifically, the LCS's ability to help defend an ESG in an anti-access scenario against a high-density small-boat attack is simulated. CRUDES (CRUiser, DEStroyer, Frigate) ships are removed and LCSs are added to the ESG force structure in varying amounts to identify force mixes that minimize ship losses. In addition, this thesis explores various conceptual capabilities that might be given to the LCS, for example helicopter/Unmanned Combat Aerial Vehicles (helo/UCAVs), stealth technology, close-in high-volume firepower, and 50+ knot sprint capability. Using graphical analysis, analysis of variance, and large-sample comparison tests, we find that being able to control aircraft is the most influential factor for minimizing ship losses. Stealth technology is another significant factor, and the combination of the two is highly effective in reducing ship losses. Close-in high-volume firepower is effective only when interacting with helo/UCAVs or stealth, and 50+ knot sprint capability is potentially detrimental in this scenario. An effective total of CRUDES ships and LCSs is between five and seven platforms. / http://hdl.handle.net/10945/855 / Lieutenant, United States Navy
17

非意志性他動詞句之分析-從多義性與限制的觀點探討分類法- / Case study of non-volitional transitive verb sentences in Japanese: from polysemy and limitation to classification

張猷定, Jhang, You Ding Unknown Date (has links)
This thesis investigates the polysemy and constraints of Japanese non-volitional transitive sentences and the criteria on which they can be classified. The method analyzes and classifies the event chains of non-volitional transitive sentences using the concept of the transitivity prototype, then compares the resulting categories using Lexical Conceptual Structure (LCS) and verifies the continuity that exists among non-volitional transitive sentences. The thesis has five chapters. Chapter 1 is the introduction, outlining the methodology used. Chapter 2 reviews the previous literature on prototypical transitive sentences and non-volitional transitive sentences, raises the problems left unresolved by those analyses, and offers the author's own definition. Chapter 3 explains, from the viewpoint of causal relations, the event chains that non-volitional transitive sentences possess, and subclassifies them using the transitivity features of volitionality, control, and affectedness. Chapter 4 further verifies the classification derived in Chapter 3 using Lexical Conceptual Structure, and analyzes the polysemy, passivizability, and continuity of non-volitional transitive sentences. Chapter 5 concludes. Most previous studies treated Japanese non-volitional transitive sentences, despite their varied properties, as a single undifferentiated phenomenon. To verify their polysemy, this thesis applies a series of analyses based on the transitivity prototype and Lexical Conceptual Structure, and shows that non-volitional transitive sentences in fact fall into three types. Keywords: transitivity prototype, volitionality, control, affectedness, LCS, causal relation, event structure, polysemy, continuity
18

Flexoelectric and dielectric phenomena in helicoidal liquid crystals

Outram, Benjamin I. January 2013 (has links)
The unique features of flexoelectric and dielectric effects are investigated, and exploited for a variety of functions, in a wide range of helicoidal liquid crystal systems, including non-chiral, cholesteric and blue phases. Electrooptic techniques are developed to measure flexoelectric parameters in non-chiral and cholesteric liquid crystals using twisted nematic and Grandjean geometries respectively. A crystal-rotation method, combined with a lock-in amplifier, enables the measurement of a very small e/K of 0.011 C N⁻¹ m⁻¹. Enhancement in chiral-flexoelectric switching is demonstrated theoretically in liquid crystals with negative dielectric anisotropy and in systems in which the pitch is constrained to be other than the natural pitch. A methodological framework for inducing stable Uniform Lying Helix alignment is developed based on weak homeotropic alignment conditions and a method to bias the helicoidal axis orientation; a series of approaches within this framework are demonstrated, including nano-grooved interfaces, periodic boundary conditions, in-plane fields, and mould-templated micro-channels. The latter approach is potentially commercially viable for sub-millisecond electrooptic technology. The contribution of flexoelectric polarization to a cholesteric material's effective dielectric permittivity is formulated, and an ability to switch a cholesteric between Grandjean and lying-helix configurations, based on the dispersion in the flexoelectric polarization and the resultant relaxation in dielectric properties, is demonstrated. The flexoelectric contribution to dielectric permittivity is exploited to enable switching in bistable reflective displays and alignment of the Uniform Lying Helix. The existence of a flexoelectric contribution to Kerr switching in blue phases is demonstrated, and a semi-empirical model for the effect is developed. The effect is the first known example of a non-polar flexoelectrooptic effect.
Independent flexoelectric and dielectric contributions to Kerr switching in blue phases are determined experimentally by measuring the induced birefringence as a function of driving frequency in flexoelectric- and dielectric-dominated wide-temperature-range blue-phase materials.
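For orientation, the quantity e/K quoted above is the figure of merit of the flexoelectro-optic effect. In the standard Patel–Meyer description of a short-pitch cholesteric (conventional symbols, not taken from the thesis itself), a field E applied perpendicular to the helix axis tilts the optic axis by an angle φ with

```latex
\tan\varphi = \frac{\bar{e}\,E}{K\,k},
\qquad
\bar{e} = \tfrac{1}{2}\left(e_{\mathrm{s}} + e_{\mathrm{b}}\right),
\qquad
k = \frac{2\pi}{P},
```

where e_s and e_b are the splay and bend flexoelectric coefficients, K is the mean elastic constant, and P is the pitch; the tilt per unit field is therefore set directly by the ratio e/K.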
19

Littoral Combat Ship (LCS) manpower requirements analysis

Douangaphaivong, Thaveephone NMN. 12 1900 (has links)
Approved for public release; distribution is unlimited. / The Littoral Combat Ship's (LCS) minimally manned core-crew goal is 15 to 50 manpower requirements, and the threshold, for both core and mission-package crews, is 75 to 110. This dramatically smaller crew size will require more than current technologies and past lessons learned from reduced-manning initiatives. Its feasibility depends upon changes in policy and operations, leveraging of future technologies, and increased workload transfer from sea to shore, along with an increased acceptance of risk. A manpower requirements analysis yielded a large baseline requirement (200) to support a notional LCS configuration. Combining the common systems from the General Dynamics and Lockheed Martin designs with other assumed equipment (i.e., the combined diesel and gas turbine (CODAG) engineering plant) produces the notional LCS configuration used as the basis for the manpower requirements. The baseline requirement was reduced through the compounded effect of manpower savings from Smart Ship and OME and suggested paradigm shifts. A Battle Bill was then created to support the notional LCS during Conditions of Readiness I and III. An efficient force-deployment regime was adopted to reduce the overall LCS class manpower requirement. The efficiency gained enables the LCS force to "flex" and satisfy deployment requirements with 25% to 30% fewer manpower requirements than the "one-for-one" crewing concept. / Lieutenant, United States Navy
20

The Binary String-to-String Correction Problem

Spreen, Thomas D. 30 August 2013 (has links)
String-to-String Correction is the process of transforming some mutable string M into an exact copy of some other string (the target string T), using a shortest sequence of well-defined edit operations. The formal STRING-TO-STRING CORRECTION problem asks for the optimal solution using just two operations: symbol deletion, and swap of adjacent symbols. String correction problems using only swaps and deletions are computationally interesting; in his paper On the Complexity of the Extended String-to-String Correction Problem (1975), Robert Wagner proved that the String-to-String Correction problem under swap and deletion operations only is NP-complete for unbounded alphabets. In this thesis, we present the first careful examination of the binary-alphabet case, which we call Binary String-to-String Correction (BSSC). We present several special cases of BSSC for which an optimal solution can be found in polynomial time; in particular, the case where T and M have an equal number of occurrences of a given symbol has a polynomial-time solution. As well, we demonstrate and prove several properties of BSSC, some of which do not necessarily hold in the case of String-to-String Correction. For instance: that the order of operations is irrelevant; that symbols in the mutable string, if swapped, will only ever swap in one direction; that the length of the Longest Common Subsequence (LCS) of the two strings is monotone nondecreasing during the execution of an optimal solution; and that there exists no correlation between the effect of a swap or delete operation on LCS, and the optimality of that operation. About a dozen other results that are applicable to Binary String-to-String Correction will also be presented. / Graduate / 0984 / 0715 / tspreen@gmail.com
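One special case mentioned above, where the mutable string M and the target T contain an equal number of occurrences of each symbol, reduces (when no deletions are needed at all) to counting adjacent swaps. The sketch below is illustrative only, not the thesis's algorithm: for binary strings with matching symbol counts, the minimum number of adjacent swaps is obtained by pairing the k-th 1 of M with the k-th 1 of T and summing the position differences.

```python
def min_adjacent_swaps(m, t):
    # Minimum number of adjacent swaps turning binary string m into t,
    # assuming both strings have the same length and the same number
    # of 1s (and hence of 0s).
    ones_m = [i for i, c in enumerate(m) if c == "1"]
    ones_t = [i for i, c in enumerate(t) if c == "1"]
    assert len(m) == len(t) and len(ones_m) == len(ones_t)
    # Pair corresponding 1s in order; each unit of displacement
    # costs exactly one adjacent swap.
    return sum(abs(a - b) for a, b in zip(ones_m, ones_t))

print(min_adjacent_swaps("1100", "0011"))  # 4 swaps: 1100 -> ... -> 0011
```

The full BSSC problem, with deletions allowed alongside swaps, is what makes the general optimization interesting; this sketch covers only the swap-only sub-case.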
