181 |
Low-Resource Natural Language Understanding in Task-Oriented Dialogue. Louvan, Samuel, 11 March 2022 (has links)
Task-oriented dialogue (ToD) systems need to interpret the user's input to understand the user's needs (intent) and corresponding relevant information (slots). This process is performed by a Natural Language Understanding (NLU) component, which maps the text utterance into a semantic frame representation, involving two subtasks: intent classification (text classification) and slot filling (sequence tagging). Typically, new domains and languages are regularly added to the system to support more functionalities. Collecting domain-specific data and performing fine-grained annotation of large amounts of data every time a new domain and language is introduced can be expensive. Thus, developing an NLU model that generalizes well across domains and languages with less labeled data (low-resource) is crucial and remains challenging.
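The semantic frame mapping described above can be made concrete with a toy example (the utterance, intent label, and slot types below are invented for illustration, not taken from the thesis):

```python
# Toy utterance mapped to a semantic frame: one intent label per utterance
# (text classification) plus one BIO tag per token (sequence tagging).
utterance = "book a table for two in Trento tonight"
tokens = utterance.split()

intent = "book_restaurant"                        # intent classification
slot_tags = ["O", "O", "O", "O",                  # slot filling: B- begins a slot,
             "B-party_size", "O", "B-city", "B-time"]  # I- continues it, O = outside

frame = {
    "intent": intent,
    "slots": {tag[2:]: tok for tok, tag in zip(tokens, slot_tags) if tag.startswith("B-")},
}
print(frame)
# {'intent': 'book_restaurant', 'slots': {'party_size': 'two', 'city': 'Trento', 'time': 'tonight'}}
```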
This thesis focuses on investigating transfer learning and data augmentation methods for low-resource NLU in ToD. Our first contribution is a study of the potential of non-conversational text as a source for transfer. Most transfer learning approaches assume labeled conversational data as the source task and adapt the NLU model to the target task. We show that leveraging similar tasks from non-conversational text improves performance on target slot filling tasks through multi-task learning in low-resource settings. Second, we propose a set of lightweight augmentation methods that apply data transformations at the token and sentence levels through slot value substitution and syntactic manipulation. Despite their simplicity, their performance is comparable to that of deep learning-based augmentation models, and they are effective on NLU tasks in six languages. Third, we investigate the effectiveness of domain adaptive pre-training for zero-shot cross-lingual NLU. In terms of overall performance, continued pre-training in English is effective across languages, indicating that the domain knowledge learned in English is transferable to other languages. In addition, domain similarity is essential: we show that intermediate pre-training data that is more similar – in terms of data distribution – to the target dataset yields better performance.
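A rough sketch of the slot value substitution idea mentioned above (the value inventory, slot names, and single-token handling are simplifying assumptions for illustration, not the thesis's actual method):

```python
import random

# Toy slot-value inventory; a real one would be collected from the training data.
slot_values = {
    "city": ["Trento", "Oslo", "Kyoto"],
    "time": ["tonight", "tomorrow", "at noon"],
}

def substitute_slots(tokens, tags, rng=random):
    """Swap each B-tagged token for a different value of the same slot type
    (single-token slots only, for brevity). Tags are kept unchanged, so the
    augmented sentence reuses the original annotation."""
    out = []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-") and tag[2:] in slot_values:
            out.append(rng.choice([v for v in slot_values[tag[2:]] if v != tok]))
        else:
            out.append(tok)
    return out

print(substitute_slots(["fly", "to", "Oslo", "tonight"],
                       ["O", "O", "B-city", "B-time"]))
```

Each call produces a new training sentence with the same slot annotation, which is what makes the transformation cheap compared with model-based augmentation.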
|
182 |
Framtagande av ett fyllningssystem för jonbytare / Development of a filling system for ion exchangers. Shahin, Hussein, January 2022 (has links)
Detta arbete har genomförts i samarbete med Hitachi Sweden AB, beläget i Landskrona. Hitachi använder sig av fyra olika storlekar av tuber, som fylls med jonbytarmassa för att kunna användas i ett kylsystem. Det nuvarande fyllningssystemet kräver mycket arbetskraft av operatörerna och det tar lång tid att genomföra hela fyllningsprocessen. Därför önskar Hitachi att få en lösning som effektiviserar och minskar tiden för operatören att fylla på varje tub. Konceptframtagning utfördes först för att undersöka och definiera produktens funktioner, utformning, egenskaper och andra viktiga faktorer för att skapa en vision för slutprodukten. Genom utvecklingsprocessen av hela fyllningssystemet för jonbytarmassan användes CAD för konstruktion, finita elementmetoden samt ritningar för alla delar. Beräkningar användes för att validera systemet och visa om det uppfyller de specifika kraven och materialvalen. Syftet med arbetet är att minska både arbetet för operatörerna och tiden som krävs för att fylla på, med fokus på att monteringen och fyllningen ska vara enkel att utföra. Den slutliga konstruktionen består av en vibrationsmotor som ska tillföra vibration till en del av konstruktionen för att få massan att flöda igenom, där stativet är gjort av låglegerat stål och en kon av rostfritt stål där massan hälls ned och ett munstycke i ABS-plast som kan bytas ut beroende på tubstorlek. Vibratorfjädrar sätts på konstruktionen för att eliminera vibration från hela konstruktionen och endast ha den på konen och bottenplattan. / This work has been carried out in collaboration with Hitachi Sweden AB, located in Landskrona. Hitachi uses four different sizes of tubes, which are filled with ion exchange resin to be used in a cooling system. The current filling system requires a lot of manual labor from the operators and takes a long time to complete the entire filling process. Therefore, Hitachi wishes to have a solution that streamlines and reduces the time for the operator to fill each tube.
Concept development was first carried out to investigate and define the product's functions, design, features, and other important factors to create a vision for the final product. Throughout the development process of the entire ion exchange resin filling system, CAD was used for design, along with the finite element method and drawings for all parts. Calculations were used to validate the system and show whether it meets the specific requirements and material choices. The purpose of the work is to reduce both the workload for the operators and the time required for filling, with a focus on making the assembly and filling process easy to perform. The final construction consists of a vibration motor that provides vibration to a part of the structure to facilitate the flow of the mass. The stand is made of low-alloy steel, and a cone made of stainless steel is used for pouring the mass. The nozzle, made of ABS plastic, can be replaced depending on the tube size. Vibrator springs are placed on the structure to isolate the vibration from the rest of the construction and confine it to the cone and base plate.
|
183 |
Linguistic Knowledge Transfer for Enriching Vector Representations. Kim, Joo-Kyung, 12 December 2017 (has links)
No description available.
|
184 |
Symmetric Generalized Gaussian Multiterminal Source Coding. Chang, Yameng Jr, January 2018 (has links)
Consider a generalized multiterminal source coding system, where (l choose m) encoders, each observing a distinct size-m subset of l (l ≥ 2) zero-mean unit-variance symmetrically correlated Gaussian sources with correlation coefficient ρ, compress their observations in such a way that a joint decoder can reconstruct the sources within a prescribed mean squared error distortion based on the compressed data. The optimal rate-distortion performance of this system was previously known only for the two extreme cases m = l (the centralized case) and m = 1 (the distributed case), and except when ρ = 0, the centralized system can achieve strictly lower compression rates than the distributed system under all non-trivial distortion constraints. Somewhat surprisingly, it is established in the present thesis that the optimal rate-distortion performance of the afore-described generalized multiterminal source coding system with m ≥ 2 coincides with that of the centralized system for all distortions when ρ ≤ 0 and for distortions below an explicit positive threshold (depending on m) when ρ > 0. Moreover, when ρ > 0, the minimum achievable rate of generalized multiterminal source coding subject to an arbitrary positive distortion constraint d is shown to be within a finite gap (depending on m and d) of its centralized counterpart in the large-l limit, except for possibly the critical distortion d = 1 − ρ. / Thesis / Master of Applied Science (MASc)
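The symmetric correlation structure assumed above is the standard equicorrelation model; writing it out makes the critical distortion d = 1 − ρ recognizable as the repeated eigenvalue of the source covariance:

```latex
% Covariance of l unit-variance sources with common correlation coefficient rho
\Sigma \;=\; (1-\rho)\, I_l \;+\; \rho\, \mathbf{1}\mathbf{1}^{\mathsf T}
\;=\;
\begin{pmatrix}
1 & \rho & \cdots & \rho \\
\rho & 1 & \cdots & \rho \\
\vdots & \vdots & \ddots & \vdots \\
\rho & \rho & \cdots & 1
\end{pmatrix},
\qquad
\operatorname{eig}(\Sigma) = \bigl\{\, 1+(l-1)\rho \ \text{(once)},\ \ 1-\rho \ \text{($l{-}1$ times)} \,\bigr\}.
```

For Σ to be positive semidefinite, ρ must lie in [−1/(l − 1), 1]; this form is stated here as the standard linear-algebra fact, not as notation taken from the thesis itself.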
|
185 |
Quantitative Hydrodynamics Analysis of Left Ventricular Diastolic Dysfunction using Color M-Mode Echocardiography. Stewart, Kelley Christine, 18 November 2008 (has links)
Numerous studies have shown that cardiac diastolic dysfunction and diastolic filling play a critical role in dictating overall cardiac health, and that the filling wave propagation speed is a significant index of the severity of diastolic dysfunction. However, the governing flow physics underlying the relationship between propagation speed and diastolic dysfunction are poorly understood. More importantly, there is currently no reliable metric that allows clinicians to diagnose cardiac dysfunction. With the increasing number of deaths caused by this disease, there is a greater need than ever for more accurate and robust diagnostic tools. Color M-mode (CMM) echocardiography is a technique commonly used in the diagnosis of Left Ventricular Diastolic Dysfunction (LVDD) and serves as the imaging modality in this work.
The motivation for the current work is a hypothesized change in the mechanism driving early diastolic filling. The early filling wave of a healthy patient is driven by rapid early diastolic relaxation, which creates a pressure difference within the left ventricle despite the fact that the left ventricular volume is increasing. As diastolic dysfunction progresses, left ventricular relaxation declines, and it is hypothesized that the left atrial pressure rises to create the favorable pressure difference needed to drive early diastole. This changes the mechanism driving early diastolic filling from a pulling mechanism primarily driven by left ventricular relaxation to a pushing mechanism primarily driven by high left atrial pressure.
Within this study, CMM echocardiography images from 125 patients, spanning healthy subjects and the three stages of LVDD, are analyzed using a newly developed automated algorithm. For the first time, a series of isovelocity contours is utilized to estimate the conventional propagation velocity. A critical point within the early filling wave is quantified as the point of early filling velocity deceleration. The clinically used propagation velocity is compared to a novel critical-point propagation velocity, calculated as a weighted average of the propagation velocities before and after the critical point, which improves the correlation between diastolic dysfunction stage and propagation velocity. Also for the first time, the spatial pressure distributions, calculated as the pressure relative to the mitral valve pressure at each location from the mitral valve to the ventricular apex, are quantified and analyzed at the instant of peak mitral-to-apical pressure difference for patients with varying stages of LVDD. The analysis of the spatial pressure distribution revealed three filling regions present in all patients. These pressure filling regions were used to calculate a useful filling efficiency, with healthy patients having a useful filling efficiency of 64.8 ± 12.7% and severely diseased patients having an efficiency of 37.1 ± 12.1%. The newly introduced parameters and the analysis of the CMM echocardiography data support the hypothesis of a change in the mechanism driving early diastolic filling by showing a decline in the early diastolic propagation velocity earlier into the left ventricle for severely diseased patients than for healthy patients, and a premature breakup of the progressive pressure gradient fueling early diastolic filling in severely diseased patients. / Master of Science
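The weighted-average construction of the critical-point propagation velocity can be sketched as follows (the choice of weights, e.g. segment durations or lengths, is an assumption made here for illustration; the study's exact weighting is not reproduced):

```python
def critical_point_velocity(v_before, v_after, w_before, w_after):
    """Weighted average of the propagation velocities measured before and
    after the critical point of early filling velocity deceleration.
    The weights are hypothetical (e.g., segment durations or lengths)."""
    return (v_before * w_before + v_after * w_after) / (w_before + w_after)

# Illustrative: 60 cm/s before and 30 cm/s after the deceleration point, equal weights.
print(critical_point_velocity(60.0, 30.0, 1.0, 1.0))  # 45.0
```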
|
186 |
Implementing Static Mesh Hole Filling in Unreal Engine 4.27. Wallquist, Felix, January 2024 (has links)
This project, completed in collaboration with Piktiv AB, aimed to develop an automated surface hole-filling feature for static meshes in Unreal Engine 4.27, with the goal of making repaired surfaces visually indistinguishable from their surrounding areas. The solution was primarily designed to address holes that arose from, but were not limited to, the use of Reduction Settings within Unreal Engine on static meshes. The functionality encompassed four key stages: boundary detection, where all holes on the mesh were identified; triangulation, which involved patching the hole using vertices from the boundary; refinement, entailing the addition of vertices and triangles to the patched area to mimic the density of the surrounding surface; and fairing, which smoothed the patched surface. Additionally, the project introduced a straightforward method for determining the texture coordinates of newly added vertices and a technique for ensuring that triangle normals correctly faced outward from the mesh. The Static Mesh Hole Filler, as implemented, efficiently fills an arbitrary number of small, planar holes, which commonly result from polygon reduction using Reduction Settings in Unreal Engine. However, the function falls short in preserving unique texture details and maintaining surface curvature when dealing with larger holes, a limitation that requires users to seek alternative methods for effectively repairing the mesh.
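The boundary-detection stage described above rests on a standard mesh fact: an edge used by exactly one triangle lies on a hole (or open) boundary. A minimal sketch of that stage, together with a naive fan triangulation for the patching stage (adequate only for small, roughly convex holes; the refinement and fairing stages are omitted):

```python
from collections import Counter

def find_boundary_edges(triangles):
    """Return edges that belong to exactly one triangle; these lie on a
    hole (or open) boundary. Triangles are (a, b, c) vertex-index tuples."""
    counts = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted((u, v)))] += 1  # undirected edge key
    return sorted(e for e, n in counts.items() if n == 1)

def fan_triangulate(loop):
    """Naive fan triangulation of an ordered boundary loop; suitable only
    for small, roughly planar, convex holes."""
    return [(loop[0], loop[i], loop[i + 1]) for i in range(1, len(loop) - 1)]

# Two triangles forming a quad: the shared edge (0, 2) is interior, the rest are boundary.
print(find_boundary_edges([(0, 1, 2), (0, 2, 3)]))  # [(0, 1), (0, 3), (1, 2), (2, 3)]
```

Chaining the returned boundary edges into closed loops is what separates individual holes before each loop is patched.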
|
187 |
Some Advances in Local Approximate Gaussian Processes. Sun, Furong, 03 October 2019 (has links)
Nowadays, the Gaussian Process (GP) has been recognized as an indispensable statistical tool in computer experiments. Due to its computational complexity and storage demands, its application to real-world problems, especially in "big data" settings, is quite limited. Among many strategies to tailor GPs to such settings, Gramacy and Apley (2015) proposed the local approximate GP (laGP), which builds approximate predictive equations from small local designs constructed around the predictive location under a chosen criterion. In this dissertation, several methodological extensions based upon laGP are proposed. One methodological contribution is multilevel global/local modeling, which deploys global hyper-parameter estimates to perform local prediction. The second contribution extends the laGP notion of "locale" to a set of predictive locations, along paths in the input space. These two contributions have been applied to satellite drag emulation, which is illustrated in Chapter 3. Furthermore, the multilevel GP modeling strategy has also been applied to synthesize field data and computer model outputs of solar irradiance across the continental United States, combined with inverse-variance weighting, which is detailed in Chapter 4. Last but not least, in Chapter 5, laGP's performance has been tested on emulating daytime land surface temperatures estimated via satellites, in the setting of irregular grid locations. / Doctor of Philosophy / In many real-life settings, we want to understand a physical relationship/phenomenon. Due to limited resources and/or ethical reasons, it is impossible to perform physical experiments to collect data, and therefore, we have to rely upon computer experiments, whose evaluation usually requires expensive simulation, involving complex mathematical equations. To reduce computational efforts, we look for a relatively cheap alternative, called an emulator, to serve as a surrogate model.
Gaussian process (GP) is such an emulator, and has been very popular due to fabulous out-of-sample predictive performance and appropriate uncertainty quantification. However, due to computational complexity, full GP modeling is not suitable for “big data” settings. Gramacy and Apley (2015) proposed local approximate GP (laGP), the core idea of which is to use a subset of the data for inference and further prediction at unobserved inputs. This dissertation provides several extensions of laGP, which are applied to several real-life “big data” settings. The first application, detailed in Chapter 3, is to emulate satellite drag from large simulation experiments. A smart way is figured out to capture global input information in a comprehensive way by using a small subset of the data, and local prediction is performed subsequently. This method is called “multilevel GP modeling”, which is also deployed to synthesize field measurements and computational outputs of solar irradiance across the continental United States, illustrated in Chapter 4, and to emulate daytime land surface temperatures estimated by satellites, discussed in Chapter 5.
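The core laGP idea, conditioning on a small local design rather than the full dataset, can be sketched with its simplest variant, a nearest-neighbor local GP. Note that laGP proper grows the local design greedily under criteria such as ALC; the plain nearest-neighbor selection, kernel, and hyperparameters below are illustrative simplifications:

```python
import numpy as np

def local_gp_predict(X, y, x_star, n_local=15, lengthscale=1.0, nugget=1e-6):
    """Predictive mean and variance at x_star, conditioning only on the
    n_local nearest training points (a simplified stand-in for laGP's
    greedy local-design search)."""
    m = min(n_local, len(X))
    idx = np.argsort(np.sum((X - x_star) ** 2, axis=1))[:m]
    Xl, yl = X[idx], y[idx]

    def k(A, B):  # squared-exponential (RBF) kernel
        D2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-D2 / (2 * lengthscale**2))

    K = k(Xl, Xl) + nugget * np.eye(m)   # local Gram matrix, jittered
    ks = k(Xl, x_star[None, :])
    mean = (ks.T @ np.linalg.solve(K, yl))[0]
    var = 1.0 + nugget - (ks.T @ np.linalg.solve(K, ks))[0, 0]
    return float(mean), float(var)

# Toy demo: local prediction of a smooth function from 50 observations.
X = np.linspace(0, 10, 50)[:, None]
y = np.sin(X).ravel()
mean, var = local_gp_predict(X, y, X[25])
```

Because each prediction solves only an m x m system instead of an n x n one, cost scales with the local design size, which is what makes the approach viable in "big data" settings.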
|
188 |
Comparison of Two Algorithms for Removing Depressions and Delineating Flow Networks From Grid Digital Elevation Models. Srivastava, Anurag, 03 August 2000 (has links)
Digital elevation models (DEMs) and their derivatives, such as slope, flow direction, and flow accumulation maps, are used frequently as inputs to hydrologic and nonpoint source modeling. The depressions frequently present in DEMs may represent the actual topography but are often the result of errors. Creating a depression-free surface is commonly required before deriving flow direction, flow accumulation, flow network, and watershed boundary maps. The objectives of this study were to: 1) characterize the occurrence of depressions in 30-m USGS DEMs and assess correlations with watershed topographic characteristics, and 2) compare the performance of two algorithms used to remove depressions and delineate flow networks from DEMs.
Sixty-six watersheds were selected to represent a range of topographic conditions characteristic of the Piedmont and Mountain and Valley regions of Virginia. Analysis was based on USGS 30-m DEMs with elevations in integer meters. With few exceptions, watersheds fell on single 7.5-minute USGS quadrangle sheets, ranged in size from 450 to 3000 hectares, and had average slopes ranging from 3 to 20 percent. ArcView (3.1) with the Spatial Analyst (1.1) extension was used to summarize characteristics of each watershed, including slope, elevation range, elevation standard deviation, curvature, channel slope, and drainage density. TOPAZ (ver. 1.2) and ArcView were each used to generate a depression-free surface, flow network, and watershed area. Characteristics of the areas 'cut' and 'filled' by the algorithms were compared to topographic characteristics of the watersheds. Blue line streams were digitized from scanned USGS 7.5-minute topographic maps (DRGs) and then rasterized at 30 m for analysis of distance from the derived flow networks.
The removal of depressions resulted in changes in elevation values in 0 - 11% of the cells in the watersheds. The percentage of area changed was higher in flatter watersheds. Changed elevation cells resulted in changes in two to three times as many cells in the derivative flow direction, flow accumulation, and slope grids. Mean fill depth by watershed ranged from 0 to 10 m, with maximum fill depths up to 40 m. In comparison with ArcView, TOPAZ on average affected 30% fewer cells with less change in elevation. The significance of the difference between ArcView and TOPAZ decreased as watershed slope increased. A spatial assessment of the modified elevation and slope cells showed that depressions in the DEMs occur predominantly on or along the flow network. Flow networks derived by ArcView and TOPAZ were not significantly different from blue line streams digitized from the USGS quadrangles, as indicated by a paired t-test. Watershed area delineated by ArcView and TOPAZ differed for almost all watersheds, but was generally within 1%.
Conclusions from this study are: 1) The depressions in 30 m DEMs can make up a significant portion of the area especially for flatter watersheds; 2) The TOPAZ algorithm performed better than ArcView in minimizing the area modified in the process of creating a depressionless surface, particularly in flatter topography; 3) Areas affected by removing depressions are predominantly adjacent to the stream network; 4) For every elevation cell changed, slopes are changed for two to three cells, on average; and 5) ArcView and TOPAZ derived flow networks closely matched the blue line streams. / Master of Science
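Depression removal by filling can be illustrated with the standard priority-flood approach: flood inward from the DEM edge, never letting the water level drop, and raise any interior cell that sits below that level. This is a generic sketch of the technique, not the specific ArcView or TOPAZ implementations compared in the study:

```python
import heapq

def fill_depressions(dem):
    """Priority-flood depression filling on a grid DEM (list of lists).
    Pops the lowest cell on the advancing flood front; any unvisited
    neighbor below that level is a pit cell and is raised to it."""
    rows, cols = len(dem), len(dem[0])
    filled = [row[:] for row in dem]
    visited = [[False] * cols for _ in range(rows)]
    heap = []
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):  # seed from the DEM edge
                visited[r][c] = True
                heapq.heappush(heap, (filled[r][c], r, c))
    while heap:
        z, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not visited[nr][nc]:
                visited[nr][nc] = True
                filled[nr][nc] = max(filled[nr][nc], z)  # raise pits to spill level
                heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return filled

# A 3x3 DEM with a single-cell pit: the pit is raised to its spill elevation.
print(fill_depressions([[5, 5, 5], [5, 1, 5], [5, 5, 5]])[1][1])  # 5
```

The resulting depressionless surface is what subsequent flow-direction and flow-accumulation derivation assumes; cut-based approaches such as TOPAZ's breaching instead lower cells along an outlet path.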
|
189 |
<b>Assessment of corn yield and physiological performance via fungicide placement and intensive management strategies</b> Malena Bartaburu Silva (19260820), 31 July 2024 (has links)
<p dir="ltr">In response to fluctuating corn (<i>Zea mays</i> L.) prices, climatic variability, and emerging diseases, farmers are increasingly adopting diverse and intensive management practices to enhance yield and profitability. This research investigates the performance of various inputs and management practices on corn production across multiple site-years, with a focus on yield components, grain fill duration, kernel development, disease severity, and economic outcomes. A multi-state research trial was established to evaluate the impact of seven inputs and management practices across multiple locations and environments in Indiana, Kentucky, and Michigan in 2022 and 2023. Each location included eight treatments: 1) control treatment (C) based on Purdue University seed rate and nitrogen (N) fertilizer recommendations (Camberato et al., 2022; Nielsen et al., 2022): 30K seeds per acre and N fertilizer application as starter (2x2) and V5 growth stage sidedress. Total N rates ranged between 180 and 200 lbs N per acre and agronomic optimum nitrogen rates (AONR) were used, 2) C + banded (2x2) fungicide, 3) C + 20% increase in corn seeding rate, 4) C + sulfur (S) fertilizer, 5) C + foliar micronutrients, 6) C + late-season N fertilizer application (V10-12 growth stage), 7) C + R1 foliar fungicide, and 8) intensive treatment (all additional inputs/management practices applied). The intensive treatment significantly increased yield by 16.4 and 18.4 bu ac<sup>-1</sup> in 2022 and 2023, respectively when compared to the control across locations, but did not enhance net profit across multiple corn price scenarios due to high application costs. Conversely, R1 fungicide applications increased yield by 16.2 and 16.7 bu ac<sup>-1</sup> in 2022 and 2023, respectively, and S applications increased yield by 12.9 bu ac<sup>-1</sup> in 2023, when compared to the control, with both treatments improving net profit under multiple corn price scenarios. 
In addition, kernel development studies in West Lafayette, IN, during 2022 and 2023 revealed that banded fungicide applications at planting and foliar fungicide applications at the R1 growth stage can reduce leaf disease severity by 3.2% to 6.6%, extend grain fill duration by 3.5 to 4.5 days, and increase maximum dry kernel weight at plant maturity by 5.7 to 9.4%, respectively, leading to further insights into the yield response mechanisms. Furthermore, a meta-analysis of 24 at-plant flutriafol fungicide placement trials across Indiana (2020 – 2023) highlighted the effectiveness of at-plant fungicides, with banded (2x2 or 2x0) applications leading to the highest yield increase of 7.8 bu ac<sup>-1</sup> and both banded and in-furrow applications reducing disease severity on corn ear leaves at the R5 growth stage by 2.1 - 2.3% when compared to the control. These findings suggest both at-plant banded and R1 foliar fungicide applications have the potential to reduce disease severity, extend corn grain fill duration, and improve yield when conditions are conducive for a response (e.g., foliar disease presence). Overall, this research highlights the ability of targeted input applications for improving both corn yield and profitability when examined across diverse environments and locations, rather than prophylactic applications of multiple inputs and increased management intensities.</p>
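The profitability finding above is a partial-budget comparison: a practice pays only when the value of its yield gain exceeds its per-acre cost. A sketch with illustrative numbers (the prices and costs below are invented, not taken from the study):

```python
def net_return_per_acre(yield_gain_bu, corn_price, input_cost):
    """Partial budget: value of the yield gain minus the cost of the input.
    All arguments are per-acre figures; the example values are hypothetical."""
    return yield_gain_bu * corn_price - input_cost

# A 16 bu/ac gain at $4.50/bu covers a $40/ac single input...
print(net_return_per_acre(16, 4.50, 40))   # 32.0
# ...but not a $90/ac bundle of inputs, mirroring the intensive-treatment result.
print(net_return_per_acre(16, 4.50, 90))   # -18.0
```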
|
190 |
An In-Vitro Comparison of Microleakage With E. faecalis In Teeth With Root-End Fillings of ProRoot MTA and Brasseler's EndoSequence Root Repair Putty. Brasseale, Beau J. (Beau John), 1980-, January 2011 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Brasseler USA (Savannah, GA) developed and introduced a bioceramic putty called EndoSequence Root Repair Material (ERRM) that can be used as a retrofilling material for surgical endodontics. The material is said to have many of the same chemical, physical, and biological properties as mineral trioxide aggregate (MTA), but with superior handling characteristics. The material is composed of calcium silicates, monobasic calcium phosphate, zirconium oxide, tantalum oxide, proprietary fillers, and thickening agents. ERRM is said by the manufacturer to bond to adjacent dentin, have no shrinkage, be highly biocompatible, hydrophilic, radiopaque, and antibacterial due to a high pH during setting. Investigations on the sealing properties of this material have not yet been conducted.
The purpose of this study was to compare the microleakage of Enterococcus faecalis in teeth with root-end fillings of ProRoot MTA and Brasseler's ERRM in a dual-chamber bacterial leakage model as described by Torabinejad and colleagues. The aim of this investigation was to determine whether a difference in bacterial microleakage between these two root-end filling materials exists.
Sixty-two human, single-rooted, mandibular premolars in which extraction was indicated were accessed and instrumented in an orthograde fashion with hand and rotary files. Root resection of the apical 3 mm was then completed and root-end retropreparations were created for placement of root-end filling material. Twenty-seven of these premolars had root-end fillings using ProRoot MTA and 27 had root-end fillings using ERRM. Two teeth were used as a positive control group with no root-end filling, and two other teeth were used as a negative control group and were sealed and coated with dentin bonding agent. The teeth were then evaluated for microleakage using a dual-chamber bacterial microleakage model for 40 days as described by Torabinejad and colleagues. Microleakage was determined by the presence of turbidity in the lower chamber of the apparatus and was assessed each day. Fresh samples of E. faecalis were used every three days to inoculate the apparatus and serve as a bacterial challenge for the materials. Results were recorded every day for 30 days. The outcome of interest (bacterial turbidity) and time-to-leakage (in days) were determined for each of the samples. Survival analysis was used to compare the two groups with a Kaplan-Meier plot to visualize the results and a nonparametric log-rank test for the group comparison.
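The Kaplan-Meier estimate used here can be computed directly from time-to-leakage data. A minimal sketch (the data are toy values; samples that never showed turbidity are treated as censored at end of follow-up):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve. times[i]: day of leakage or end of
    follow-up; events[i]: 1 if leakage (turbidity) was observed, 0 if
    censored. Returns (time, S(t)) pairs at event times."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, curve, i = 1.0, [], 0
    while i < len(pairs):
        t = pairs[i][0]
        d = removed = 0
        while i < len(pairs) and pairs[i][0] == t:  # group ties at time t
            d += pairs[i][1]
            removed += 1
            i += 1
        if d:  # the survival estimate drops only at event (leakage) times
            surv *= (n_at_risk - d) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
    return curve

# Toy data: leakage on days 1, 2, and 4; one sample censored at day 3.
print(kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1]))
```

Comparing two such curves (one per filling material) is what the log-rank test formalizes.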
The microleakage of ERRM was not statistically different (p > 0.05) from that of ProRoot MTA when subjected to E. faecalis over the 40-day observation period. Both groups had a small number of early failures (within 4 days), and no leakage was observed for the remainder of the study. Therefore, the null hypothesis was not rejected.
The results of this research support the use of either of these two materials when compared with the controls. The microleakage of Brasseler’s EndoSequence Root Repair Material was at least as good as ProRoot Mineral Trioxide Aggregate when tested with E. faecalis.
|