441

Documenting the History of Oxygen Depletion in Lake St. Croix, Minnesota, Using Chironomidae Remains in the Sedimentary Record

Stewart, Caitlin E 01 January 2009 (has links) (PDF)
Lake St. Croix is a natural impoundment located at the southern end of the St. Croix River. Land use changes since European settlement (c. 1850) have resulted in nutrient runoff, eutrophication, and periodic oxygen depletion in the hypolimnion of Lake St. Croix. Establishing sound lake management practices requires knowledge of historical conditions obtained through paleoecological studies. Remains of non-biting midges (Insecta: Diptera: Chironomidae) in lake sediments have been shown to be reliable indicators of past hypolimnetic oxygen conditions. Cores from two sub-basins in the lake were collected in 2006. Midge analysis indicated that shifts in species assemblages correspond to the times of land use change. Chironomus and Procladius, which are tolerant of low oxygen levels, increased in relative abundance as land use changes adversely impacted the St. Croix River’s watershed. Volume-weighted hypolimnetic oxygen concentrations were estimated using a transfer function developed for southern Ontario. Mean chironomid-reconstructed volume-weighted hypolimnetic oxygen values for the post-settlement period were 0.73 mg/L lower than mean pre-settlement values for sub-basin 1, near Prescott, WI, and 0.45 mg/L lower for sub-basin 3, near Lakeland, MN. These results indicate that oxygen depletion has occurred in the lake since the time of European settlement and are supported by increases in the relative abundance of eutrophic midge bioindicators and decreases in the relative abundance of bioindicators of less productive conditions since the 1850s. This study, in conjunction with other historical and paleoecological studies of Lake St. Croix, provides historical data for setting management goals and strategies for the lake.
442

A Quantitative Framework for Constructing a Multi-Asset CTA with a Momentum-Based Approach

Fällström, Rebecca January 2023 (has links)
Commodity Trading Advisors (CTAs) have gained popularity due to their ability to generate absolute returns. Little is known about how CTAs work and which variables are important to tune in order to create a profitable strategy. Some investors use CTA-like strategies to leverage their portfolios and create positive returns when the spot market is falling. The report is written for Skandinaviska Enskilda Banken and aims to give the bank and readers an understanding of how changes to the parameters of a CTA strategy affect its outcome, with focus on three main measurements: Sharpe ratio, drawdown, and total return. The foundation of a CTA is that it relies on signals from a given set of assets and makes investment decisions solely based on them. CTAs can be rule-based with a binary signal, or they can use a continuous signal, as in this report. The thesis aims to recreate a CTA using a continuous momentum signal and to invest accordingly. Several variables were tested; most importantly, the report focuses on the asset weights and investigates whether the momentum signal is sufficient on its own or whether a risk-parity weighting is needed on top of the signal in order to generate a return that matches the expectations of a low drawdown and a high Sharpe ratio. Beyond the weight allocation, different lookback periods for both the signal and the weights were tested. A shorter lookback generated a quicker response that was more sensitive to short trends in the market, which in some cases was profitable, but it also lost more of its accumulated return when the trend proved false. The equally weighted signal, which only takes the trend into account when allocating the asset weights, was more volatile in its returns and benefited from a longer signal. The CTA results presented can only be seen as an index, since the strategy is rebalanced at every rebalancing point; the frequency of those points was examined, and the strategy performed well when rebalanced once a week or once a month, while rebalancing every day or once a year did not yield better results. As expected, the CTA benefits from trends in the market, regardless of their direction. The best periods for the CTA were when the market was very volatile, mainly 2008 and 2022. When there is no clear trend, the CTA reacts too slowly and often loses money. One important conclusion is that the CTA should never be used as an investment strategy on its own, but rather as a hedging strategy that is allocated a fraction of a total long-only portfolio.
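A minimal sketch of the kind of strategy described above — a continuous momentum signal, an optional inverse-volatility (risk-parity) overlay, and periodic rebalancing — is shown below. It is an illustration under stated assumptions, not the thesis's implementation: the lookback lengths, the tanh signal scaling, the weekly rebalancing grid, and the hypothetical `price_df` input are all choices made for the example.

```python
import numpy as np
import pandas as pd

def momentum_signal(prices: pd.DataFrame, lookback: int = 120) -> pd.DataFrame:
    """Continuous momentum signal: past return, soft-clipped to roughly [-1, 1]."""
    ret = prices.pct_change(lookback)
    return np.tanh(ret / ret.abs().rolling(250).mean())

def risk_parity_weights(returns: pd.DataFrame, lookback: int = 60) -> pd.DataFrame:
    """Inverse-volatility weights so each asset contributes a similar amount of risk."""
    inv = 1.0 / returns.rolling(lookback).std()
    return inv.div(inv.sum(axis=1), axis=0)

def backtest(prices: pd.DataFrame, rebalance: str = "W-FRI") -> pd.Series:
    """Hold signal * weight between rebalancing points; return the daily strategy P&L."""
    daily_ret = prices.pct_change()
    target = (momentum_signal(prices) * risk_parity_weights(daily_ret)).resample(rebalance).last()
    positions = target.reindex(daily_ret.index).ffill().shift(1)  # trade next day: no look-ahead
    return (positions * daily_ret).sum(axis=1)

# Example usage with a hypothetical DataFrame of daily asset prices:
# pnl = backtest(price_df)
# sharpe = pnl.mean() / pnl.std() * np.sqrt(252)
```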
443

Dynamic Contrast-Enhanced MRI and Diffusion-Weighted MRI for the Diagnosis of Bladder Cancer

Nguyen, Huyen Thanh 12 July 2013 (has links)
No description available.
444

The Minimum Rank Problem for Outerplanar Graphs

Sinkovic, John Henry 05 July 2013 (has links) (PDF)
Given a simple graph G with vertex set V(G) = {1,2,...,n}, define S(G) to be the set of all real symmetric matrices A such that for all i not equal to j, the ijth entry of A is nonzero if and only if ij is in E(G). The range of the ranks of matrices in S(G) is of interest and can be determined by finding the minimum rank. The minimum rank of a graph, denoted mr(G), is the minimum rank achieved by a matrix in S(G). The maximum nullity of a graph, denoted M(G), is the maximum nullity achieved by a matrix in S(G). Note that mr(G) + M(G) = |V(G)|, and so in finding the maximum nullity of a graph, the minimum rank of the graph is also determined. The minimum rank problem for a graph G asks us to determine mr(G), which in general is very difficult. A simple graph is planar if there exists a drawing of G in the plane such that any two line segments representing edges of G intersect only at a point which represents a vertex of G. A planar drawing partitions the rest of the plane into open regions called faces. A graph is outerplanar if there exists a planar drawing of G such that every vertex lies on the outer face. We consider the class of outerplanar graphs and summarize some of the recent results concerning the minimum rank problem for this class. The path cover number of a graph, denoted P(G), is the minimum number of vertex-disjoint paths needed to cover all the vertices of G. We show that for all outerplanar graphs G, P(G) is greater than or equal to M(G). We identify a subclass of outerplanar graphs, called partial 2-paths, for which P(G) = M(G). We give a different characterization for another subset of outerplanar graphs, unicyclic graphs, which determines whether M(G) = P(G) or M(G) = P(G) - 1. We give an example of a 2-connected outerplanar graph for which P(G) > M(G). A cover of a graph G is a collection of subgraphs of G such that the union of the edge sets of the subgraphs is equal to E(G). The rank-sum of a cover C of G, denoted rs(C), is equal to the sum of the minimum ranks of the subgraphs in C. We show that for an outerplanar graph G, there exists an edge-disjoint cover of G consisting of cliques, stars, cycles, and double cycles such that the rank-sum of the cover is equal to the minimum rank of G. Using the fact that such a cover exists allows us to show that the minimum rank of a weighted outerplanar graph is equal to the minimum rank of its underlying simple graph.
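As a small illustration of the definitions above (the example matrices are ours, not taken from the thesis), the matrices below realize the known values mr(P_4) = 3 and mr(C_4) = 2: each is symmetric, its off-diagonal zero/nonzero pattern matches the graph's edges, and its rank attains the minimum.

```python
import numpy as np

# Path P4 (edges 12, 23, 34): a tridiagonal matrix in S(P4) with rank 3 = n - 1.
A_path = np.array([[1, 1, 0, 0],
                   [1, 2, 1, 0],
                   [0, 1, 2, 1],
                   [0, 0, 1, 1]], dtype=float)

# Cycle C4 (edges 12, 23, 34, 14): Gram matrix of (1,0), (1,1), (0,1), (-1,1).
# Its rank is 2 = n - 2, and the non-edges (1,3) and (2,4) get zero entries.
V = np.array([[1, 0], [1, 1], [0, 1], [-1, 1]], dtype=float)
A_cycle = V @ V.T

print(np.linalg.matrix_rank(A_path))   # 3
print(np.linalg.matrix_rank(A_cycle))  # 2
```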
445

Latency-aware Optimization of the Existing Service Mesh in Edge Computing Environment

Sun, Zhen January 2019 (has links)
Edge computing, as an approach to leveraging computation capabilities located in different places, is widely deployed in industry today. With the development of edge computing, many large companies have moved from the traditional monolithic software architecture to a microservice design. To provide better performance for applications that contain numerous loosely coupled modules deployed across multiple clusters, service routing among those clusters needs to be effective. However, most existing solutions rely on static service routing and load-balancing strategies, so application performance cannot be effectively optimized when network conditions change. To address this problem, we proposed a dynamic weighted round-robin algorithm and implemented it on top of the cutting-edge service mesh Istio. The solution is implemented as a Docker image called RoutingAgent, which is simple to deploy and manage. With the RoutingAgent running in the system, the weights of the target routing clusters are dynamically adjusted based on the detected inter-cluster network latency, and the client-side request turnaround time is consequently decreased. The solution is evaluated in an emulated environment. Compared to Istio without the RoutingAgent, the experiment results show that client-side latency can be effectively minimized by the proposed solution in a multi-cluster environment with dynamic network conditions. In addition to minimizing response time, the emulation results demonstrate that the load on each cluster remains well balanced.
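The core idea — shifting routing weights toward clusters with lower measured inter-cluster latency — can be sketched as below. This is an illustrative reconstruction, not the RoutingAgent code: the inverse-latency weighting rule, the smoothing factor, and the cluster names are assumptions. In the actual system the resulting weights would presumably be pushed into the mesh's routing configuration rather than used for a local pick.

```python
import random

class LatencyAwareWeights:
    """Recompute per-cluster routing weights from smoothed latency probes."""

    def __init__(self, clusters, alpha=0.3):
        self.alpha = alpha                       # EWMA smoothing for latency samples
        self.latency = {c: None for c in clusters}

    def observe(self, cluster, latency_ms):
        """Record one inter-cluster latency measurement, exponentially smoothed."""
        prev = self.latency[cluster]
        self.latency[cluster] = latency_ms if prev is None else (
            self.alpha * latency_ms + (1 - self.alpha) * prev)

    def weights(self):
        """Weights proportional to 1/latency, normalized to sum to roughly 100."""
        inv = {c: 1.0 / l for c, l in self.latency.items() if l}
        total = sum(inv.values()) or 1.0
        return {c: round(100 * v / total) for c, v in inv.items()}

    def pick(self):
        """Weighted random pick approximating weighted round robin for one request."""
        w = self.weights()
        return random.choices(list(w), weights=list(w.values()), k=1)[0]

# Example: cluster-b is twice as slow, so it receives roughly one third of the traffic.
agent = LatencyAwareWeights(["cluster-a", "cluster-b"])
agent.observe("cluster-a", 10.0)
agent.observe("cluster-b", 20.0)
print(agent.weights())   # e.g. {'cluster-a': 67, 'cluster-b': 33}
```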
446

Investigation of deep learning approaches for overhead imagery analysis / Utredning av djupinlärningsmetoder för satellit- och flygbilder

Gruneau, Joar January 2018 (has links)
Analysis of overhead imagery has great potential to produce real-time data cost-effectively, which can be an important foundation for decision-making in business and politics. Every day a massive amount of new satellite imagery is produced. To fully take advantage of these data volumes, a computationally efficient pipeline is required for the analysis. This thesis proposes a pipeline which outperforms the Segment Before you Detect network [6] and different types of fast region-based convolutional neural networks [61] by a large margin in a fraction of the time. The model obtains a prediction error for counting cars of 1.67% on the Potsdam dataset and increases the vehicle-wise F1 score on the VEDAI dataset from 0.305, as reported by [61], to 0.542. This thesis also shows that it is possible to outperform the Segment Before you Detect network in less than 1% of the time on car counting and vehicle detection while also using less than half of the resolution. This makes the proposed model a viable solution for large-scale satellite imagery analysis.
447

Learning, Detection, Representation, Indexing And Retrieval Of Multi-agent Events In Videos

Hakeem, Asaad 01 January 2007 (has links)
The world that we live in is a complex network of agents and their interactions, which are termed events. An instance of an event is composed of directly measurable low-level actions (which I term sub-events) having a temporal order. Also, the agents can act independently (e.g. voting) as well as collectively (e.g. scoring a touch-down in a football game) to perform an event. With the dawn of the new millennium, low-level vision tasks such as segmentation, object classification, and tracking have become fairly robust. But a representational gap still exists between low-level measurements and high-level understanding of video sequences. This dissertation is an effort to bridge that gap, in which I propose novel learning, detection, representation, indexing and retrieval approaches for multi-agent events in videos. In order to achieve the goal of high-level understanding of videos, firstly, I apply statistical learning techniques to model multi-agent events. For that purpose, I use the training videos to model the events by estimating the conditional dependencies between sub-events. Thus, given a video sequence, I track the people (heads and hand regions) and objects using a Meanshift tracker. An underlying rule-based system detects the sub-events using the tracked trajectories of the people and objects, based on their relative motion. Next, an event model is constructed by estimating the sub-event dependencies, that is, how frequently sub-event B occurs given that sub-event A has occurred. The advantages of such an event model are two-fold. First, I do not require prior knowledge of the number of agents involved in an event. Second, no assumptions are made about the length of an event. Secondly, after learning the event models, I detect events in a novel video by using graph clustering techniques. To that end, I construct a graph of temporally ordered sub-events occurring in the novel video. Next, using the learnt event model, I estimate a weight matrix of conditional dependencies between sub-events in the novel video. Applying Normalized Cut (a graph clustering technique) to the estimated weight matrix then detects events in the novel video. The principal assumption made in this work is that events are composed of highly correlated chains of sub-events that have high conditional dependency (association) within the cluster and relatively low conditional dependency (disassociation) between clusters. Thirdly, in order to represent the detected events, I propose an extension of the CASE representation of natural languages. I extend CASE to allow the representation of temporal structure between sub-events. Also, in order to capture both multi-agent and multi-threaded events, I introduce a hierarchical CASE representation of events in terms of sub-events and case-lists. The essence of the proposition is that, based on the temporal relationships of the agent motions and a description of its state, it is possible to build a formal description of an event. Furthermore, I recognize the importance of representing the variations in the temporal order of sub-events that may occur in an event, and encode the temporal probabilities directly into my event representation. The proposed extended representation with probabilistic temporal encoding is termed P-CASE and allows a plausible means of interface between users and the computer. Using the P-CASE representation I automatically encode the event ontology from training videos.
This offers a significant advantage, since the domain experts do not have to go through the tedious task of determining the structure of events by browsing all the videos. Finally, I utilize the event representation for indexing and retrieval of events. Given the different instances of a particular event, I index the events using the P-CASE representation. Next, given a query in the P-CASE representation, event retrieval is performed using a two-level search. At the first level, a maximum likelihood estimate of the query event with the different indexed event models is computed. This provides the maximum matching event. At the second level, a matching score is obtained for all the event instances belonging to the maximum matched event model, using a weighted Jaccard similarity measure. Extensive experimentation was conducted for the detection, representation, indexing and retrieval of multiple agent events in videos of the meeting, surveillance, and railroad monitoring domains. To that end, the Semoran system was developed that takes in user inputs in any of the three forms for event retrieval: using predefined queries in P-CASE representation, using custom queries in P-CASE representation, or query by example video. The system then searches the entire database and returns the matched videos to the user. I used seven standard video datasets from the computer vision community as well as my own videos for testing the robustness of the proposed methods.
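To make the event-model step concrete, the sketch below estimates how frequently sub-event B follows sub-event A from training sequences of detected sub-events, and assembles the estimates into the weight matrix on which a clustering step such as Normalized Cut would operate. It is a hedged illustration of the idea rather than the dissertation's implementation; the following-window size and the toy sub-event labels are assumptions.

```python
from collections import defaultdict
import numpy as np

def conditional_dependency(sequences, window=3):
    """Estimate P(b occurs within `window` steps | a occurred) from sub-event sequences."""
    follows = defaultdict(lambda: defaultdict(int))
    count = defaultdict(int)
    for seq in sequences:
        for i, a in enumerate(seq):
            count[a] += 1
            for b in seq[i + 1:i + 1 + window]:
                follows[a][b] += 1
    labels = sorted(count)
    W = np.zeros((len(labels), len(labels)))
    for i, a in enumerate(labels):
        for j, b in enumerate(labels):
            W[i, j] = follows[a][b] / count[a]
    return labels, W

# Toy training "videos": each is a temporally ordered list of detected sub-events.
train = [["approach", "extend_hand", "shake", "depart"],
         ["approach", "extend_hand", "shake", "talk", "depart"]]
labels, W = conditional_dependency(train)
# W can be symmetrized and fed to a Normalized-Cut-style clustering step to group
# highly associated sub-events into candidate events.
print(labels)
print(np.round(W, 2))
```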
448

Information Retrieval Performance Enhancement Using The Average Standard Estimator And The Multi-criteria Decision Weighted Set

Ahram, Tareq 01 January 2008 (has links)
Information retrieval is much more challenging than traditional small document collection retrieval. The main difference is the importance of correlations between related concepts in complex data structures. These structures have been studied by several information retrieval systems. This research began by performing a comprehensive review and comparison of several techniques of matrix dimensionality estimation and their respective effects on enhancing retrieval performance using singular value decomposition and latent semantic analysis. Two novel techniques have been introduced in this research to enhance intrinsic dimensionality estimation: the Multi-criteria Decision Weighted model, which estimates matrix intrinsic dimensionality for large document collections, and the Average Standard Estimator (ASE), which estimates data intrinsic dimensionality based on the singular value decomposition (SVD). ASE estimates the level of significance of the singular values resulting from the singular value decomposition. ASE assumes that variables with deep relations have sufficient correlation and that only relationships with high singular values are significant and should be maintained. Experimental results over all possible dimensions indicated that ASE improved matrix intrinsic dimensionality estimation by including the effects of both the magnitude of decrease in singular values and random noise distracters. Analysis based on selected performance measures indicates that for each document collection there is a region of lower dimensionalities associated with improved retrieval performance. However, there was clear disagreement between the various performance measures on the model associated with best performance. The introduction of the multi-weighted model and Analytical Hierarchy Processing (AHP) analysis helped in ranking dimensionality estimation techniques and facilitated satisfying overall model goals by balancing contradictory constraints and information retrieval priorities. ASE provided the best estimate of MEDLINE intrinsic dimensionality among all tested dimensionality estimation techniques, and further, ASE improved precision and relative relevance by 10.2% and 7.4%, respectively. AHP analysis indicates that ASE and the weighted model ranked best among the methods, with 30.3% and 20.3% in satisfying overall model goals for MEDLINE and 22.6% and 25.1% for CRANFIELD. The weighted model improved MEDLINE relative relevance by 4.4%, while the scree plot, weighted model, and ASE provided better estimates of data intrinsic dimensionality for the CRANFIELD collection than Kaiser-Guttman and percentage-of-variance methods. ASE provided a better estimate of CISI intrinsic dimensionality than all other tested methods, since all methods except ASE tended to underestimate the CISI document collection's intrinsic dimensionality. ASE improved CISI average relative relevance and average search length by 28.4% and 22.0%, respectively. This research provided evidence that a system using a weighted multi-criteria performance evaluation technique achieves better overall performance than a single-criterion ranking model. Thus, the weighted multi-criteria model with dimensionality reduction provides a more efficient implementation for information retrieval than using a full rank model.
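The ASE criterion itself is specific to this dissertation, but the surrounding machinery — SVD of a term–document matrix, truncation to an estimated intrinsic dimensionality k, and retrieval in the reduced latent space — can be sketched as follows. The elbow-style rule used here to pick k is a stand-in assumption for illustration, not the Average Standard Estimator's formula, and the toy matrix is invented.

```python
import numpy as np

def lsa_truncate(term_doc: np.ndarray):
    """SVD of the term-document matrix and a simple dimensionality pick."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    # Stand-in rule: keep singular values above the mean singular value.
    # (The thesis's ASE scores the significance of singular values differently.)
    k = max(1, int(np.sum(s > s.mean())))
    return U[:, :k], s[:k], Vt[:k, :], k

def query_scores(query_vec, U, s, Vt):
    """Cosine similarity between a query and each document in the k-dimensional space."""
    docs = (np.diag(s) @ Vt).T                  # documents in the latent space
    q = U.T @ query_vec                         # project the query into the same space
    docs_n = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    return docs_n @ (q / np.linalg.norm(q))

# Toy example: 5 terms x 4 documents.
A = np.array([[2, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 3, 0, 1],
              [0, 0, 2, 2],
              [1, 0, 0, 1]], dtype=float)
U, s, Vt, k = lsa_truncate(A)
print(k, query_scores(np.array([1, 1, 0, 0, 0], dtype=float), U, s, Vt))
```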
449

The Effects of a Uniformly Weighted Exercise Suit on Biomarkers of Bone Turnover in Response to Aerobic Exercise in Postmenopausal Women with Low Bone Density

Terndrup, Haley Frances, Ventura, Alison K, Hagobian, Todd, Hazelwood, Scott 01 July 2016 (has links) (PDF)
Current options for maintaining or slowing aging-related bone mineral density (BMD) loss in postmenopausal women primarily include pharmaceutical agents. More recently, physical activity and exercise have been suggested as highly effective, low-cost alternatives. Weighted aerobic exercise, utilizing load carriage systems (LCS), is known to increase the gravitational forces impacting bone, creating a higher osteogenic stimulus than standard aerobic exercise. In response to the positive research on aerobic exercise with well-designed LCS, Dr. Lawrence Petrakis, MD, developed a unique 5.44 kg uniformly weighted exercise suit. This study aimed to examine the effects of the uniformly weighted exercise suit on serum biochemical markers of bone formation (Amino-Terminal Propeptide of Type 1 Collagen [P1NP]; Carboxy-Terminal Propeptide of Type 1 Collagen [P1CP]) and resorption (Carboxy-Terminal Telopeptide of Type 1 Collagen [CTX]) in response to submaximal aerobic exercise in postmenopausal women with low bone density. Nine volunteer, sedentary to lightly active, healthy postmenopausal women (Age: 58.7±1.1 years, BMI: 28.2±1.0, BMD T-score: -1.2±0.5) participated in this within-subjects study, wherein each participant exercised under two counterbalanced conditions (aerobic exercise with [ES] or without [NS] the exercise suit). During each condition, participants walked on a treadmill at 65%-75% of their age-predicted maximum heart rate until they reached their goal caloric expenditure (400 kcal). There was a seven-day washout period between sessions. Serum was processed using ELISA protocols to investigate the change in each biomarker at 24 and 72 hours post exercise, relative to baseline. The results indicated that, compared to the NS condition, the ES condition elicited a greater positive change in P1CP at 24 hours and a greater decrease in CTX in the hours following exercise (P < 0.05); there was no effect of condition on P1NP at any time point (P > 0.05). In sum, submaximal aerobic exercise while wearing the uniformly weighted exercise suit elicited an antiresorptive effect on bone collagen resorption with a simultaneous increase in bone collagen formation 24 hours post exercise.
450

Efficient Sampling Plans for Control Charts When Monitoring an Autocorrelated Process

Zhong, Xin 15 March 2006 (has links)
This dissertation investigates the effects of autocorrelation on the performance of various sampling plans for control charts in detecting special causes that may produce sustained or transient shifts in the process mean and/or variance. Observations from the process are modeled as a first-order autoregressive process plus a random error. Combinations of two Shewhart control charts and combinations of two exponentially weighted moving average (EWMA) control charts, based on both the original observations and on the process residuals, are considered. Three types of sampling plans are investigated: samples of n = 1, samples of n > 1 observations taken together at one sampling point, or samples of n > 1 observations taken at different times. In comparing these sampling plans it is assumed that the sampling rate in terms of the number of observations per unit time is fixed, so taking samples of n = 1 allows more frequent plotting. The best overall performance of sampling plans for control charts in detecting both sustained and transient shifts in the process is obtained by taking samples of n = 1 and using an EWMA chart combination with an observations chart for the mean and a residuals chart for the variance. The Shewhart chart combination with the best overall performance, though inferior to the EWMA chart combination, is based on samples of n > 1 taken at different times, with an observations chart for the mean and a residuals chart for the variance. / Ph. D.
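The building blocks described above — an AR(1)-plus-error observation model and EWMA statistics computed on the raw observations and on the residuals — can be illustrated with the short simulation below. The numeric choices (φ, λ, control-limit width, shift size) are assumptions for the example, not the values studied in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=500, phi=0.7, sigma_ar=1.0, sigma_err=0.5, shift=1.5, shift_at=300):
    """First-order autoregressive process plus random error, with a sustained mean shift."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma_ar)
    obs = x + rng.normal(0, sigma_err, n)
    obs[shift_at:] += shift                      # special cause: sustained shift in the mean
    return obs

def ewma(z, lam=0.1, mu0=0.0):
    """EWMA statistic w_t = lam*z_t + (1-lam)*w_{t-1}, started at the in-control mean."""
    w = np.empty_like(z)
    prev = mu0
    for t, zt in enumerate(z):
        prev = lam * zt + (1 - lam) * prev
        w[t] = prev
    return w

obs = simulate()
resid = obs[1:] - 0.7 * obs[:-1]                 # simple residual proxy, phi assumed known
for name, series in [("observations", obs), ("residuals", resid)]:
    stat = ewma(series)
    sigma_hat = series[:250].std()               # in-control estimate (shift occurs later)
    limit = 3 * sigma_hat * np.sqrt(0.1 / (2 - 0.1))   # asymptotic iid EWMA control limit
    out = np.flatnonzero(np.abs(stat) > limit)
    # Note: on autocorrelated observations the naive iid limit can signal early (a false
    # alarm), which is exactly the kind of effect the dissertation quantifies.
    print(name, "first out-of-control signal at t =", out[0] if out.size else None)
```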
