51 |
Structural and metamorphic evolution of the west-central Newton window, eastern Inner Piedmont, Burke, Catawba, and Lincoln Counties, North Carolina
Gilliam, William George, 01 August 2010 (has links)
Rocks of the western and eastern Inner Piedmont, along with the eastern Blue Ridge, comprise the Neoacadian metamorphic core of the southern Appalachians. The composite Inner Piedmont consists of the eastern Tugaloo (western Inner Piedmont) and Cat Square (eastern Inner Piedmont) terranes, which are separated by the Brindle Creek fault. Geochronologic evidence established the Brindle Creek fault as a terrane boundary within the Inner Piedmont, separating terranes of Laurentian and mixed Laurentian/Avalonian (peri-Gondwanan) zircon suites. The Newton window exposes Tugaloo terrane rocks of the Tallulah Falls Formation in the footwall of the Brindle Creek thrust sheet.
Detailed geologic mapping in the western Newton window revealed structural and metamorphic similarities between rocks across the Brindle Creek fault. Peak metamorphism occurred contemporaneously with peak deformation, reaching upper amphibolite facies across both terranes. Peak Neoacadian metamorphism occurred between 360 and 345 Ma. Electron microprobe analyses of Cat Square terrane core and rim garnet-biotite and garnet-plagioclase pairs indicate average temperatures and pressures of 620 °C, 3.6 kbar and 710 °C, 6.1 kbar, respectively. Temperature and pressure estimates from lower Tallulah Falls Formation core and rim analyses yield conditions of 570 °C, 4.1 kbar and 690 °C, 5.9 kbar, respectively. The maximum burial depth for both Cat Square and Tugaloo terrane rocks is ~20 km. The range in metamorphic ages suggests that subduction and accretion occurred at a rate of 1 kilometer per 1.75 million years.
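A rough check on the burial depth quoted above can be made by converting the peak pressures to lithostatic depth. The sketch below is illustrative only: the crustal density and gravitational acceleration it uses are typical assumed values, not figures given in the abstract.

# Convert peak metamorphic pressure to an approximate lithostatic burial depth.
# Assumed values: crustal density (2800 kg/m^3) and g (9.81 m/s^2) are not given
# in the abstract; they are only typical figures used for illustration.

RHO = 2800.0   # kg/m^3, assumed average crustal density
G = 9.81       # m/s^2

def burial_depth_km(pressure_kbar: float) -> float:
    """Depth (km) implied by a lithostatic pressure, using P = rho * g * h."""
    pressure_pa = pressure_kbar * 1.0e8   # 1 kbar = 100 MPa = 1e8 Pa
    return pressure_pa / (RHO * G) / 1000.0

for p in (5.9, 6.1):   # rim pressures reported for the two terranes
    print(f"{p} kbar -> ~{burial_depth_km(p):.0f} km")
# Both values come out near 21-22 km, of the same order as the ~20 km quoted above.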
Six deformational events shaped the western Newton window. D1 features are limited to amphibolite boudins of the Tugaloo terrane. D2 regional penetrative structures such as high-temperature foliations, mineral stretching lineations, and curved fold axes are the product of Neoacadian tectonism. The dominant S2 foliation trends north-northwest and dips moderately to the west-southwest. North-northwest-trending L2 mineral lineations parallel F2 fold axes, creating a curved map pattern recording crustal flow in an ancient orogenic channel. D3 resulted in open folding. The D4 event produced regional open folds. D5 and D6 features occur as joints, cataclasis, and diabase intrusion.
|
52 |
DD and WD costs: The development of a model for cutting costs in Dep. X
Hartvigson, Johannes; Cilingiroglu, Gunay; Palmén, Sara, January 2008 (has links)
This paper is an exploratory case study of a logistics cost problem at Dep. X, and its normative purpose is to develop a model to aid Dep. X in solving the problem of large damage costs. Dep. X is the logistics department of a warehouse located in a midsized Swedish city. The warehouse is part of an international furniture chain that operates in more than 40 countries worldwide and has an annual turnover of 211 billion SEK. This company has a clear cost focus, and it is therefore very important for the different departments to keep costs at a minimum. The problem that Dep. X is facing is related to damaged products, which can be further divided into Delivery Damages [DDs] (damages inflicted on products before the freight reaches the department) and Warehouse Damages [WDs] (damages that occur at the department). The warehouse has tried to solve the problem by forming a unit called ‘Cost hunters’. This group has some suspicions but has not yet found the underlying reasons for the damaged goods. What they do know is that this is a disproportionately large cost for the warehouse compared to other warehouses. To investigate the cost issue, an abductive research strategy was used. The authors found early in the research process that the problem was related to WDs and, after a pre-study, hypothesized four problem areas to investigate: (1) flaws in communicating knowledge, (2) flaws in the working environment, (3) flaws in motivation, and (4) flaws concerning customers. In order to establish whether or not these hypotheses were correct, a benchmarking study was conducted with a department of essentially the same size under the same company. Apart from the pre-study, a total of nine interviews were conducted: five at Dep. X and four at Dep. Y. The researchers also sent out surveys to both departments, conducted a damage-levels study, and met with a group manager at the distribution center in order to collect the needed information. After data collection, the data were processed and analyzed, and the researchers concluded that the problem with high WDs at Dep. X was related primarily to flaws in communicating knowledge and flaws in the working environment. The two other problem areas, flaws in motivation and flaws concerning customers, were also related to WDs to some extent, but this impact was not considered large enough to be a major cause. The authors finish the thesis by illustrating with a model how the problems seem to have arisen and by suggesting improvement areas to address in order to eliminate the cost issue.
|
53 |
An Analysis of the Film Industry's Collapsing Video Window
Otis, Evan T, 01 January 2012 (has links)
The collapse of the video window is disrupting the economic framework between exhibitors and distributors in the film industry. This study analyzes the collapse from several angles and provides a detailed description of why the collapse has been, and will continue to be, disruptive. I first examine the impact various technologies have had on the collapse of the video window (the time between a motion picture's theatrical release and its video release) during 1997-2012. The average video window declined from 5 months 22 days in 1997 to 3 months 29 days in 2012. Difference-of-means tests were used to inspect the average video window at the time of each technology's introduction. Then, in order to reveal how the length of the video window affects box office profit, I use an ordinary least squares regression to examine the determinants of gross domestic box office profit for a sample of 294 top-earning U.S. films during 1999-2012.
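As a hedged illustration of the kind of regression described above, the sketch below fits an ordinary least squares model in which the video window length enters as one regressor; the other regressor names and all of the data are hypothetical placeholders, since the abstract does not list the actual determinants or report the data.

# Sketch of an OLS regression of domestic box office on the video window length
# plus other controls. Variable names and data are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 294                                   # sample size mentioned in the abstract
window_days = rng.uniform(90, 180, n)     # hypothetical video window lengths
budget = rng.uniform(10, 200, n)          # hypothetical production budgets ($M)
screens = rng.uniform(500, 4000, n)       # hypothetical opening screen counts
noise = rng.normal(0, 20, n)
box_office = 50 + 0.1 * window_days + 0.8 * budget + 0.02 * screens + noise

# Design matrix with an intercept column; solve the least-squares problem.
X = np.column_stack([np.ones(n), window_days, budget, screens])
beta, *_ = np.linalg.lstsq(X, box_office, rcond=None)
print(dict(zip(["const", "window_days", "budget", "screens"], beta.round(3))))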
|
54 |
An Obstruction-Check Approach to Mining Closed Sequential Patterns in Data Streams
Chin, Tsz-lin, 21 June 2010 (has links)
Online mining of sequential patterns over data streams is an important problem in data mining. Sequential patterns in data streams have many applications, such as market analysis, network security, sensor networks, and web tracking. Previous studies have shown that mining closed patterns provides more benefits than mining the complete set of frequent patterns, since closed pattern mining leads to compact results. A sequential pattern is closed if it does not have any supersequence with the same support. Chang et al. proposed a time-based sliding window model, which has two features: the new item is inserted at the front of a sequence, and the obsolete item is removed from the tail of a sequence. For solving the problem of data mining in the time-based sliding window, Chang et al. proposed an algorithm called SeqStream. It uses a data structure, IST (Inverse Closed Sequence Tree), to keep the result, and the IST can be updated incrementally by the SeqStream algorithm. Although the SeqStream algorithm uses the technique of dividing the time-based sliding window to speed up the updating of the IST, it still scans the sliding window many times when the IST needs to be updated. In this thesis, we propose an obstruction-check approach to maintain the set of closed sequential patterns. Our approach is based on a lattice structure, in which each parent node is a supersequence of its children. By utilizing this feature, we establish an obstruction link between a parent and a child if their supports are the same; if a node has no obstruction link to any parent, the node is a closed sequential pattern. We can then use this property to traverse the lattice structure locally, and we can further exploit the features of the time-based sliding window model to keep the traversal local. Based on the lattice structure, we propose the EULB (Exact Update based on Lattice structure with Bit stream)-Lattice algorithm. The EULB-Lattice algorithm is an exact method for mining data streams: we record additional information instead of scanning the entire sliding window. We conduct several experiments using different synthetic data sets, and the simulation results show that the proposed algorithm outperforms the SeqStream algorithm.
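To make the obstruction-link idea concrete, the following sketch (an illustrative reconstruction, not the authors' EULB-Lattice implementation) marks a lattice node as non-closed whenever some parent, i.e. a supersequence one level up, carries the same support.

# Sketch of the obstruction-link idea on a sequence lattice: a node is closed
# only if no parent (supersequence) has the same support. Illustrative only.

class Node:
    def __init__(self, sequence, support):
        self.sequence = sequence      # e.g. a tuple of items
        self.support = support
        self.parents = []             # parents are supersequences of this node

    def link_parent(self, parent):
        self.parents.append(parent)

    def has_obstruction_link(self):
        # An obstruction link exists when a parent carries the same support.
        return any(p.support == self.support for p in self.parents)

    def is_closed(self):
        return not self.has_obstruction_link()

# Tiny example: <a> occurs 5 times, but always inside <a, b> (also 5 times),
# so <a> is obstructed and only <a, b> is closed; <b> (support 7) stays closed.
ab = Node(("a", "b"), support=5)
a = Node(("a",), support=5)
b = Node(("b",), support=7)
a.link_parent(ab)
b.link_parent(ab)
print(a.is_closed(), b.is_closed(), ab.is_closed())   # False True True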
|
55 |
A New TFT with Trenched Body and Airgap-Insulated Structure for Capacitorless 1T-DRAM Application
Chang, Tzu-feng, 29 July 2010 (has links)
In this thesis, we propose a new thin-film transistor with a trenched body and an airgap-insulated structure (AITFT) for one-transistor dynamic random access memory (1T-DRAM) applications and investigate the influence of different materials on the sensing current window and retention time. Its basic operation mechanisms are based on impact ionization and the floating body effect. Because the generated holes are stored in the pseudo-neutral region, the threshold voltage (Vth) is lowered, resulting in a high drain current for state "1", so the stored data can be recognized by sensing the difference in drain current. According to ISE TCAD 10.0 simulations, owing to the trench and airgap-isolation design, the AITFT enlarges the sensing current window by about 212% and the retention time by about 42% compared with the conventional TFT at a channel length of 150 nm and a temperature of 300 K. Also, owing to the source/drain-tie, the generated heat can be dissipated quickly from the source/drain to the substrate, so the thermal instability is alleviated. In other words, the AITFT can improve the thermal reliability without losing control of the short-channel effects.
|
56 |
A Set-Checking Algorithm for Mining Maximal Frequent Itemsets from Data Streams
Lin, Pei-Ying, 15 July 2011 (has links)
Online mining of maximal frequent itemsets over data streams is an important problem in data mining. A maximal frequent itemset is an itemset whose support is greater than or equal to the minimum support and which is not a subset of any other frequent itemset. Previous algorithms that mine maximal frequent itemsets in traditional databases are not suitable for data streams. Because data streams are (1) continuous, (2) fast, (3) unbounded, (4) real-time, and (5) can be scanned only once, mining them poses many new challenges. First, it is unrealistic to keep the entire stream in main memory or even in secondary storage, since a data stream arrives continuously and the amount of data is unbounded. Second, traditional methods that mine stored datasets by multiple scans are infeasible, since the streaming data is passed only once. Third, mining streams requires fast, real-time processing in order to keep up with the high data arrival rate, and mining results are expected to be available within a short response time. To mine maximal frequent itemsets from data streams using the landmark window model, Mao et al. proposed the INSTANT algorithm. In the landmark window model, knowledge discovery is performed on the values between the beginning time and the present; the advantage of this model is that the results are correct as compared to the other models. The structure of the INSTANT algorithm is simple and it saves much memory space, but it takes a long time to mine the maximal frequent itemsets: when a new transaction comes, the INSTANT algorithm performs too many comparisons against the old transactions. In this thesis, we propose the Set-Checking algorithm to mine maximal frequent itemsets from data streams using the landmark window model. We use a lattice structure to store our information; it records the subset relationship between each child node and its parent node, and for every node we record the itemset and its support. When a new transaction comes, we consider five relations: (1) equivalent, (2) superset, (3) subset, (4) intersection, and (5) empty. According to these five relations over the lattice structure, we can add the transaction and renew the supports efficiently. Our simulation results show that the processing time of the Set-Checking algorithm is faster than that of the INSTANT algorithm.
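The five relations used when a new transaction arrives can be illustrated with plain set operations; the sketch below is a simplified reconstruction that ignores the support bookkeeping and lattice maintenance of the actual Set-Checking algorithm.

# Classify the relation between an incoming transaction and a stored node itemset
# into the five cases described above. Simplified sketch; support updates and
# lattice maintenance are omitted.

def relation(transaction: frozenset, itemset: frozenset) -> str:
    if transaction == itemset:
        return "equivalent"
    if transaction > itemset:
        return "superset"      # transaction strictly contains the stored itemset
    if transaction < itemset:
        return "subset"        # transaction is strictly contained in the itemset
    if transaction & itemset:
        return "intersection"  # partial overlap
    return "empty"             # no common items

node = frozenset({"a", "b", "c"})
for t in [{"a", "b", "c"}, {"a", "b", "c", "d"}, {"a", "b"}, {"b", "d"}, {"e"}]:
    print(sorted(t), "->", relation(frozenset(t), node))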
|
57 |
A Subset-Lattice Algorithm for Mining Maximal Frequent Itemsets over a Data Stream Sliding Window
Wang, Syuan-Yun, 09 July 2012 (has links)
Online mining of association rules in data streams is an important field in data mining, and mining the maximal frequent itemsets is an important issue within it. A frequent itemset is called maximal if it is not a subset of any other frequent itemset, and the set of all maximal frequent itemsets is denoted the MFI set. Because data streams are continuous, high-speed, unbounded, and real-time, a data stream can be scanned only once. Therefore, the previous algorithms that mine maximal frequent itemsets in traditional databases are not suitable for data streams. Furthermore, many applications are interested in the recent data, and the sliding window is the model that deals with the most recent portion of the data stream; in the sliding window model, a window size is required. One of the algorithms for mining maximal frequent itemsets based on the sliding window model is the MFIoSSW algorithm. It uses a compact, array-based structure A to store the maximal frequent itemsets and other helpful itemsets, but it takes a long time to mine the maximal frequent itemsets: when a new transaction comes, the number of comparisons between the new transaction and the old transactions is too large. Therefore, in this project, we propose a sliding window approach, the Subset-Lattice algorithm. We use a lattice structure to store the information of the transactions; it stores the relationship between each child node and its parent node, and in each node we record the itemset and its support. When a new transaction comes, we consider five relations: (1) equivalent, (2) subset, (3) intersection, (4) empty set, and (5) superset. With these five relations, we can add new transactions and update the supports efficiently.
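As a small illustration of the definition of maximality used above, the sketch below filters a collection of frequent itemsets down to the maximal ones by a brute-force subset check; it is a definition check only, not the Subset-Lattice algorithm.

# Brute-force illustration of maximality: keep only frequent itemsets that are
# not a subset of any other frequent itemset. Not the Subset-Lattice algorithm.

def maximal_frequent(frequent_itemsets):
    frequent = [frozenset(s) for s in frequent_itemsets]
    return [s for s in frequent
            if not any(s < other for other in frequent)]   # s < other: proper subset

frequent = [{"a"}, {"b"}, {"a", "b"}, {"c"}, {"a", "c"}]
print([sorted(s) for s in maximal_frequent(frequent)])
# -> [['a', 'b'], ['a', 'c']]; {'a'}, {'b'}, {'c'} are subsets of larger frequent sets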
|
58 |
Study of cloud properties from single-scattering, radiative forcing, and retrieval perspectives
Lee, Yong-Keun, 02 June 2009 (has links)
This dissertation reports on three different yet related topics in light scattering computation, radiative transfer simulation, and remote sensing implementation, regarding cloud properties and the retrieval of cloud properties from satellite-based infrared radiometric measurements. First, the errors associated with the use of circular cylinders as surrogates for hexagonal columns in computing the optical properties of pristine ice crystals at infrared (8-12 µm) wavelengths are investigated. It is found that the differences between the results for circular cylinders and hexagonal columns are on the order of a few percent at infrared wavelengths. Second, this dissertation investigates the outgoing broadband longwave and window channel radiances at the top of the atmosphere under clear-sky conditions on the basis of data acquired by the Clouds and the Earth's Radiant Energy System (CERES) instrument onboard the NASA Terra satellite platform. Based on the comparison of the observed broadband radiances with those obtained from rigorous radiative transfer simulations, it is found that the theoretical results tend to be larger than their measured counterparts. Extensive sensitivity studies regarding the uncertainties of various parameters were carried out. Within the considered uncertainties of various factors, the computed radiances are still larger than the observed radiances if thin cirrus clouds are excluded. Thus, a potential cause of the differences could be the presence of thin cirrus clouds whose visible optical thickness is smaller than approximately 0.3. Third, this dissertation presents an illustration of the application of hyperspectral infrared channel observations to the retrieval of cloud properties. Specifically, the hyperspectral measurements acquired from the Atmospheric Infrared Sounder (AIRS) aboard the NASA Aqua platform are used to infer cloud top pressure, effective cloud amount, cloud thermodynamic phase, cloud optical thickness, and the effective size of cloud particles. The AIRS-based retrievals are compared with the counterparts of the operational cloud products derived from the Moderate Resolution Imaging Spectroradiometer (MODIS). The two retrievals agree reasonably well except for the retrieved cloud effective particle size. Furthermore, the diurnal and seasonal contrasts of cloud properties are also investigated on the basis of the cloud properties retrieved from the AIRS data.
|
59 |
An Improved PDA Multi-User Detector for DS-CDMA UWB Systems
Li, Tzung-Cheng, 28 August 2005 (has links)
Ultra-wideband (UWB) technology has attracted the interest of researchers and commercial groups due to its advantages of high data rate, low complexity, and low power consumption. The direct-sequence code division multiple access ultra-wideband (DS-CDMA UWB) system is one of the proposals for the IEEE 802.15.3a standard. By combining the strengths of both UWB and DS-CDMA techniques, the system can construct a multiple-access architecture using the direct-sequence method. In a multi-user environment, the major problem in receiver design for conventional DS-CDMA systems is multiple access interference (MAI). In a DS-CDMA UWB system, the transmitted signal is further corrupted by inter-symbol interference (ISI) and neighboring-symbol interference because of the multipath channel characteristics.
In this thesis, we use a training method to obtain the spreading waveform as influenced by the multipath channel. Based on this spreading waveform, we use a block method to reformulate the received signal, which allows the interference to be separated into multiple access interference and neighboring-symbol interference. By combining interference cancellation, the probabilistic data association (PDA) filter, and sliding-window techniques, we can eliminate the interference. In the computer simulation section, we compare the detection performance of the sliding-window PDA detector with conventional detectors, and the simulation results show that the improved PDA detector outperforms the others.
|
60 |
On the Porting of Nano-X and Its Integration with OpenGL ES
Hsieh, Yen-Pin, 10 February 2006 (has links)
Embedded systems often use several ways to emulate floating-point computation, due to limited hardware resources and performance/cost considerations. The first part of this thesis discusses how systems without hardware support emulate floating-point operations, the performance difference between systems with and without a hardware floating-point unit, and finally a performance comparison between floating-point and fixed-point arithmetic. We also discuss the process of porting the fixed-point version of Nano-X to the Versatile PB/926EJ-S development board.
Due to growing market demand and significant improvements in hardware, several 3D-display applications are beginning to be presented to the public. In order to develop 3D programs, we need a standard API to reduce development time. OpenGL ES is a royalty-free, cross-platform API for full-function 2D and 3D graphics on embedded systems, developed by the Khronos Group. The second part of this thesis discusses the implementation of EGL, the interface between the windowing system and the OpenGL ES rendering API, and of a GLUT (OpenGL Utility Toolkit)-like library, in order to make OpenGL programmers' lives easier.
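As a rough illustration of the fixed-point arithmetic discussed in the first part of the thesis, the sketch below shows Q16.16 multiplication carried out with integer operations only; it is an assumed example and is not taken from the Nano-X or OpenGL ES code bases.

# Illustration of Q16.16 fixed-point arithmetic using only integer operations,
# the kind of substitute for floating point discussed above. Not taken from
# the Nano-X or OpenGL ES code bases.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS        # 1.0 in Q16.16

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def to_float(x: int) -> float:
    return x / ONE

def fx_mul(a: int, b: int) -> int:
    # The raw product carries 32 fractional bits; shift back down to 16.
    return (a * b) >> FRAC_BITS

a = to_fixed(3.25)
b = to_fixed(-1.5)
print(to_float(fx_mul(a, b)))   # -4.875, computed without a floating-point multiply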
|