  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Integrated design and control optimization of hybrid electric marine propulsion systems based on battery performance degradation model

Chen, Li 13 September 2019 (has links)
This dissertation develops an integrated, model-based design and optimization platform to solve the combined optimal design and optimal control, or hardware and software co-design, problem for hybrid electric propulsion systems. Specifically, hybrid and plug-in hybrid electric powertrains with diesel- and natural gas (NG)-fueled compression ignition (CI) engines and a large Li-ion battery energy storage system (ESS) for propelling a hybrid electric marine vessel are investigated. The combined design and control optimization of the hybrid propulsion system is formulated as a bi-level, nested optimization problem: the lower-level optimization applies dynamic programming (DP) to ensure optimal energy management for each feasible powertrain design, while the upper-level global optimization identifies the optimal sizes of the key powertrain components under that optimized control. Li-ion batteries have recently become a promising ESS technology for electrified transportation applications, but these costly battery ESSs account for a large portion of powertrain electrification and hybridization costs and have a much shorter lifetime than other key powertrain components. Different battery performance modelling methods are reviewed to identify an appropriate degradation prediction approach. Using this approach and a large set of experimental data, a performance degradation and life prediction model for LiFePO4 batteries is developed and validated. This model serves as the foundation both for determining the optimal size of the battery ESS and for optimal energy management in powertrain control, balancing fuel-consumption reduction against battery-life extension.
In modelling and designing the different hybrid electric marine propulsion systems, a life cycle cost (LCC) model of the cleaner, hybrid propulsion systems is introduced, considering the investment, replacement and operational costs of their major contributors. The costs of liquefied NG (LNG), diesel and electricity in the LCC model are collected from various sources, with a focus on current industrial prices in British Columbia, Canada. The greenhouse gas (GHG) and criteria air pollutant (CAP) emissions of traditional diesel and cleaner NG-fueled engines with conventional and optimized hybrid electric powertrains are also evaluated. To solve the computationally expensive nested optimization problem, a surrogate-model-based (or metamodel-based) global optimization method is used. This global search algorithm uses optimized Latin hypercube sampling (OLHS) to form a Kriging model and the expected improvement (EI) online sampling criterion to refine it, guiding the search for the global optimum with a much-reduced number of evaluations of the computationally intensive objective function. Solutions from the combined hybrid propulsion system design and control optimization are presented and discussed. The resulting hybrid propulsion system with an NG engine and Li-ion battery ESS constitutes a more economical and environmentally friendly propulsion system design for the tugboat.
This research has further improved the methodology of model-based design and optimization of hybrid electric marine propulsion systems, solving complicated co-design problems through more efficient approaches and demonstrating the feasibility and benefits of the new methods through their application to tugboat propulsion system design and control development. Other main contributions include incorporating the battery performance degradation model into powertrain size optimization and optimal energy management; performing a systematic design and optimization considering the LCC of diesel and NG engines in hybrid electric powertrains; and developing an effective method for the computationally intensive powertrain co-design problem.
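The bi-level structure described above, with dynamic programming for energy management nested inside a component-sizing search, can be sketched in a few lines. This toy is purely illustrative: the demand profile, cost constants and candidate battery sizes are assumptions, not values from the dissertation, and the actual platform couples DP with surrogate-based global search rather than exhaustive enumeration.

```python
# Toy bi-level co-design: the inner dynamic programming (DP) loop finds
# optimal energy management for a fixed battery size; the outer loop
# searches candidate sizes. All numbers below are illustrative assumptions.

DEMAND = [20.0, 35.0, 50.0, 30.0, 15.0]  # kWh demanded per stage (assumed)
FUEL_COST = 0.30                          # engine cost per kWh (assumed)
BATT_COST = 0.10                          # amortized ESS cost per kWh (assumed)

def dp_energy_management(capacity, soc_steps=11):
    """Backward DP over a discretized battery state of charge (SOC)."""
    socs = [capacity * i / (soc_steps - 1) for i in range(soc_steps)]
    cost_to_go = [0.0] * soc_steps        # terminal cost is zero
    for demand in reversed(DEMAND):
        new = []
        for soc in socs:
            best = float("inf")
            for j, soc2 in enumerate(socs):    # candidate next-stage SOC
                batt = soc - soc2              # >0 discharge, <0 charge
                engine = max(demand - batt, 0.0)  # engine covers the rest
                best = min(best, engine * FUEL_COST + cost_to_go[j])
            new.append(best)
        cost_to_go = new
    return min(cost_to_go)                # free choice of initial SOC

def co_design(candidates):
    """Outer sizing loop: total cost = ESS investment + operating cost."""
    return min(candidates,
               key=lambda c: BATT_COST * c + dp_energy_management(c))

print(co_design([0.0, 10.0, 20.0, 40.0]))  # largest battery wins in this toy
```

In the real problem the outer search is far too expensive for enumeration, which is exactly what motivates the Kriging/EI surrogate approach described in the abstract.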
242

A Reporting System for a Device Management Application

Svensson, Marcus January 2009 (has links)
Device management applications are used to manage software on devices such as mobile phones. OMSI Forum provides such an application, which is used to update the software on a phone. Software updates can be performed at device management stations placed in stores or other service locations. Whenever a phone's software is updated, information about the update process is stored in a log. These logs can then be analyzed to generate statistics about updates, such as the number of successful or failed updates or which faults are common.

This master's thesis project solves the problem of manually collecting and compiling logs from several stores by automating the process. Rather than collecting logs manually, each device management station sends its logs to a centralized server, which stores all collected logs in a database. This makes it possible to generate charts and statistics in a simple manner from a web application. The solution makes the analysis more effective, allowing users to concentrate on analyzing data by removing the task of collecting logs.
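The statistics step of such a reporting system, turning collected update logs into counts of successes, failures and common faults, can be sketched as follows. The record fields (`station`, `status`, `fault`) are invented for illustration; the actual log format of the OMSI Forum application is not described in the abstract.

```python
from collections import Counter

# Hypothetical log records as the central server might store them in its
# database; the field names are assumptions for illustration only.
logs = [
    {"station": "store-1", "status": "success", "fault": None},
    {"station": "store-1", "status": "failed",  "fault": "timeout"},
    {"station": "store-2", "status": "success", "fault": None},
    {"station": "store-2", "status": "failed",  "fault": "timeout"},
    {"station": "store-2", "status": "failed",  "fault": "bad-image"},
]

status_counts = Counter(r["status"] for r in logs)  # successes vs failures
fault_counts = Counter(r["fault"] for r in logs if r["fault"])  # common faults

print(status_counts["success"], status_counts["failed"])  # 2 3
print(fault_counts.most_common(1))                        # [('timeout', 2)]
```

A web application would run queries like these against the log database to render its charts.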
243

Numerical tools for the large eddy simulation of incompressible turbulent flows and application to flows over re-entry capsules

Rasquin, Michel 29 April 2010 (has links)
The context of this thesis is the numerical simulation of turbulent flows at moderate Reynolds numbers and the improvement of the capabilities of SFELES, an in-house 3D unsteady incompressible flow solver. In addition to this abstract, the thesis comprises five further chapters. The second chapter presents the numerical methods implemented in the two CFD solvers used in this work, SFELES and PHASTA. The third chapter concentrates on the implementation of a new library called FlexMG. This library provides various types of iterative solvers preconditioned by algebraic multigrid methods, which require much less memory to solve linear systems than the direct sparse LU solver available in SFELES. Multigrid is an iterative procedure that relies on a series of increasingly coarse approximations of the original 'fine' problem. The underlying concept is the following: low-wavenumber errors on fine grids become high-wavenumber errors on coarser levels, where they can be removed effectively by fixed-point smoothing methods. Two families of algebraic multigrid preconditioners have been implemented in FlexMG, namely smoothed aggregation-type and non-nested finite element-type. Unlike pure gridless multigrid, both families use the information contained in the initial fine mesh. The non-nested finite element-type multigrid additionally needs a hierarchy of coarse meshes, so that our approaches can be considered hybrid. The aggregation-type multigrid is smoothed with either a constant or a linear least-squares fitting function, whereas the non-nested finite element-type multigrid is smooth by construction. All these multigrid preconditioners are tested as stand-alone solvers or coupled with a GMRES (Generalized Minimal RESidual) method.
After analyzing the accuracy of the solutions obtained with our solvers on a typical test case in fluid mechanics (unsteady flow past a circular cylinder at low Reynolds number), their performance in terms of convergence rate, computational speed and memory consumption is compared with the performance of a direct sparse LU solver as a reference. Finally, the importance of using smooth interpolation operators is also underlined in this work. The fourth chapter is devoted to the study of subgrid scale models for the large eddy simulation (LES) of turbulent flows. It is well known that turbulence features a cascade process by which kinetic energy is transferred from the large turbulent scales to the smaller ones. Below a certain size, the smallest structures are dissipated into heat because of the effect of the viscous term in the Navier-Stokes equations. In the classical formulation of LES models, all the resolved scales are used to model the contribution of the unresolved scales. However, most of the energy exchanges between scales are local, which means that the energy of the unresolved scales derives mainly from the energy of the small resolved scales. In this fourth chapter, constant-coefficient-based Smagorinsky and WALE models are considered under different formulations. This includes a classical version of both the Smagorinsky and WALE models and several scale-separation formulations, where the resolved velocity field is filtered in order to separate the small turbulent scales from the large ones. From this separation of turbulent scales, the strain rate tensor and/or the eddy viscosity of the subgrid scale model is computed from the small resolved scales only. One important advantage of these scale-separation models is that the dissipation they introduce through their subgrid scale stress tensor is better controlled compared to their classical version, where all the scales are taken into account without any filtering. 
More precisely, the filtering operator (based on a top-hat filter in this work) allows the decomposition u' = u - ubar, where u is the resolved velocity field (large and small resolved scales), ubar is the filtered velocity field (large resolved scales) and u' is the small-resolved-scales field. Finally, two variational multiscale (VMS) methods are also considered. The philosophy of the variational multiscale methods differs significantly from that of the scale-separation models. Concretely, the discrete Navier-Stokes equations are projected into two disjoint spaces, so that one set of equations characterizes the evolution of the large resolved scales of the flow, whereas another set governs the small resolved scales. Once the Navier-Stokes equations have been projected into these two spaces, associated with the large and small scales respectively, the variational multiscale method consists of adding an eddy viscosity model to the small-scales equations only, leaving the large-scales equations unchanged. This projection is obvious in the case of a full spectral discretization of the Navier-Stokes equations, where the evolution of the large and small scales is governed by the equations associated with the low and high wavenumber modes respectively, but it is more complex to achieve in the context of a finite element discretization. For that purpose, two variational multiscale concepts are examined in this work: the first projector is based on the construction of aggregates, whereas the second relies on hierarchical linear basis functions. In order to gain experience in the field of LES modeling, some of the above-mentioned models were first implemented in another code, PHASTA, presented along with SFELES in the second chapter.
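The decomposition u' = u - ubar can be demonstrated with a discrete top-hat (box) filter on a synthetic two-scale signal. The filter width and wavenumbers below are arbitrary illustrative choices, unrelated to the actual LES filter width used in the thesis.

```python
import numpy as np

def top_hat(u, width):
    """Discrete top-hat (box) filter: local average over `width` samples."""
    return np.convolve(u, np.ones(width) / width, mode="same")

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.1 * np.sin(40.0 * x)   # large scale + small scale
ubar = top_hat(u, width=9)               # large resolved scales
uprime = u - ubar                        # small resolved scales: u' = u - ubar

# The filtered field retains the large scale; u' is dominated by the small one.
print(np.max(np.abs(ubar)), np.max(np.abs(uprime)))
```

A scale-separation subgrid model would then evaluate its strain rate tensor and/or eddy viscosity from `uprime` rather than from the full field `u`.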
Finally, the relevance of our models is assessed through the large eddy simulation of a fully developed turbulent channel flow at a low Reynolds number under statistical equilibrium. In addition to the analysis of the mean eddy viscosity computed for all our LES models, comparisons in terms of shear stress, root mean square velocity fluctuation and mean velocity are performed against a fully resolved direct numerical simulation as a reference. The fifth chapter of the thesis focuses on the numerical simulation with SFELES of the 3D turbulent flow over an Apollo-type re-entry capsule at low speed. The Reynolds number based on the heat shield is set to Re=10^4 and the angle of attack to 180º, i.e. with the heat shield facing the free stream. Only the final stage of the flight, before splashdown or landing, is considered in this work, so that the incompressibility hypothesis in SFELES remains valid. Two LES models are considered in this chapter, namely a classical and a scale-separation version of the WALE model. Although the capsule geometry is axisymmetric, the flow field in its wake is not, and it induces unsteady forces and moments acting on the capsule. Characterizing the phenomena occurring in the wake of the capsule and determining their main frequencies are essential to ensure static and dynamic stability during the final stage of the flight. Visualizations by means of 3D isosurfaces and 2D slices of the Q-criterion and the vorticity field confirm the presence of a large meandering recirculation zone characterized by a low Strouhal number, St≈0.15. Due to the detachment of the flow at the shoulder of the capsule, an annular shear layer forms; it is then affected by Kelvin-Helmholtz instabilities and ends up rolling up, leading to the formation of vortex rings characterized by a high frequency. This vortex shedding depends on the Reynolds number; a Strouhal number St≈3 is detected at Re=10^4.
Finally, the analysis of the force and moment coefficients reveals the existence of a lateral force perpendicular to the streamwise direction in the case of the scale-separation WALE model, which suggests that the wake of the capsule may have preferential orientations during the vortex shedding. In the case of the classical version of the WALE model, no lateral force has been observed so far, so the mean flow is thought to remain axisymmetric after 100 units of non-dimensional physical time. The last chapter recalls the main conclusions drawn from the previous chapters.
244

Systemization of RFID Tag Antenna Design Based on Optimization Techniques and Impedance Matching Charts

Butt, Munam 16 July 2012 (has links)
The performance of commercial Radio Frequency Identification (RFID) tags is primarily limited by the techniques presently used for tag antenna design. Current industry practice is to identify the RFID tag application (books, clothing, etc.) and then build antenna prototypes of different configurations until minimum read-range requirements are satisfied. However, these techniques lack an electromagnetic basis and cannot provide a low-cost solution to the tag antenna design process. RFID tag performance characteristics (read range, chip-antenna impedance matching, surrounding environment) can be very complex, and a thorough understanding of RFID tag antenna design, gained through an electromagnetic approach, can reduce the tag antenna size and the overall cost of the RFID system. The research presented in this thesis addresses the antenna design process for passive RFID tags. With the growing number of applications (inventory, supply chain, pharmaceuticals, etc.), the proposed RFID antenna design process demonstrates procedures for designing tag antennas for such applications. The electrical and geometrical properties of the designed antennas were investigated with the help of computer electromagnetic simulations in order to achieve optimal tag performance criteria such as read range, chip-impedance matching and antenna efficiency. Experimental measurements were performed on the proposed antenna designs to complement the computer simulations and analytical modelling.
245

Top-k and Skyline Query Processing over Relational Databases

Samara, Rafat January 2012 (has links)
Top-k and skyline queries are long-studied topics in the database and information retrieval communities and are two popular operations for preference retrieval. A top-k query returns a subset of the most relevant answers instead of all answers; efficient top-k processing retrieves the k objects with the highest overall score. In this thesis, algorithms for efficient top-k processing in different scenarios are presented, together with a framework, built on existing algorithms and incorporating cost-based optimization, that covers these scenarios. This framework applies when the user can specify a ranking function, and a real-life scenario is worked through on it step by step. A skyline query returns the set of points that are not dominated by any other point in the given dataset (a record x dominates a record y if x is as good as y in all attributes and strictly better in at least one). Algorithms for evaluating skyline queries are introduced, and one of the problems of the skyline query, known as the curse of dimensionality, is discussed. A new strategy based on existing skyline algorithms, skyline frequency and a binary tree strategy is presented as a solution to this problem; it applies when the user cannot specify a ranking function, and a real-life scenario applying it step by step is also presented. Finally, advantages of the top-k query are carried over to the skyline query in order to retrieve results quickly and efficiently.
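The two operations can be made concrete with a toy "hotels" relation of (price, distance) pairs, where smaller is better in both attributes: the dominance test below is exactly the definition quoted above, the skyline is computed with a naive nested-loop scan, and top-k uses a user-supplied ranking function. The data and the ranking function are invented for illustration.

```python
def dominates(x, y):
    """x dominates y: at least as good in every attribute, strictly better
    in at least one (here every attribute is 'smaller is better')."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

def skyline(points):
    """Naive nested-loop skyline: keep the points no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def top_k(points, k, score):
    """Top-k under a user-supplied ranking function (lower score is better)."""
    return sorted(points, key=score)[:k]

hotels = [(50, 8), (60, 2), (80, 1), (70, 5), (55, 9)]  # (price, distance)
print(skyline(hotels))                               # → [(50, 8), (60, 2), (80, 1)]
print(top_k(hotels, 2, lambda p: p[0] + 10 * p[1]))  # → [(60, 2), (80, 1)]
```

Note that the skyline needs no ranking function, matching the thesis's split between the two strategies: top-k when the user can specify one, skyline when they cannot.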
246

Memory efficient approaches of second order for optimal control problems

Sternberg, Julia 16 December 2005 (has links) (PDF)
Consider a time-dependent optimal control problem in which the state evolution is described by an initial value problem. There is a variety of numerical methods to solve such problems; the so-called indirect approach is considered in detail in this thesis. Indirect methods solve the decoupled boundary value problems resulting from the necessary conditions for the optimal control problem. The so-called Pantoja method describes a computationally efficient stage-wise construction of the Newton direction for the discrete-time optimal control problem. There are many relationships between multiple shooting techniques and the Pantoja method, which are investigated in this thesis. In this context, the equivalence of the Pantoja method and the multiple shooting method of Riccati type is shown. Moreover, the Pantoja method is extended to the case where the state equations are discretised using an implicit numerical method. Furthermore, the concept of symplecticness and Hamiltonian systems is introduced. In this regard, a suitable numerical method is presented that can be applied to unconstrained optimal control problems, and it is proved that this method is symplectic. The iterative solution of optimal control problems in ordinary differential equations by Pantoja or Riccati-equivalent methods leads to a succession of triple sweeps through the discretised time interval. The second (adjoint) sweep relies on information from the first (original) sweep, and the third (final) sweep depends on both of them. Typically, the steps of the adjoint sweep involve more operations and require more storage than the other two. The key difficulty is the enormous amount of memory required to implement these methods if all states throughout the forward and adjoint sweeps are stored. One goal of this thesis is to present checkpointing techniques for memory-reduced implementations of these methods.
For this purpose, the well-known concept of checkpointing is extended to nested checkpointing for multiple traversals. The proposed nested reversal schedules drastically reduce the required spatial complexity. The schedules are designed to minimise the overall execution time given a certain total amount of storage for the checkpoints. The proposed scheduling schemes are applied to memory-reduced implementations of the optimal control problem of laser surface hardening and other optimal control problems.
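The memory/recomputation trade-off behind checkpointing can be illustrated with the simplest uniform scheme: store every c-th forward state, and regenerate the in-between states from the nearest checkpoint during the adjoint sweep. The toy dynamics and adjoint below are assumptions for illustration; the thesis's nested reversal schedules optimize this trade-off far more carefully across multiple sweeps.

```python
# Uniform checkpointing: instead of storing all n forward states for the
# adjoint sweep, store only every `every`-th state and recompute the rest.
def forward_step(x):
    return 0.5 * x + 1.0          # toy state propagation (assumed)

def adjoint_step(x, lam):
    return 0.5 * lam              # adjoint of the toy step: d(next)/dx = 0.5

def reverse_sweep(x0, n_steps, every=4):
    # Forward sweep: keep only the checkpoints.
    checkpoints = {0: x0}
    x = x0
    for i in range(n_steps):
        x = forward_step(x)
        if (i + 1) % every == 0:
            checkpoints[i + 1] = x
    # Adjoint sweep: recompute states between checkpoints on demand.
    lam = 1.0                      # seed adjoint at the final time
    for i in reversed(range(n_steps)):
        base = (i // every) * every
        xi = checkpoints[base]
        for _ in range(i - base):  # recompute the state at step i
            xi = forward_step(xi)
        lam = adjoint_step(xi, lam)
    return lam                     # sensitivity dx_n / dx_0

print(reverse_sweep(2.0, 10))      # → 0.5**10
```

Storage drops from n states to roughly n/c checkpoints at the price of up to c-1 extra forward steps per adjoint step; binomial and nested schedules refine exactly this balance.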
247

Densities of nested Archimedean copulas

Pham, David 04 1900 (has links)
Nested Archimedean copulas have recently gained interest since they generalize the well-known class of Archimedean copulas by allowing for partial asymmetry. Sampling algorithms and strategies have been well investigated for nested Archimedean copulas. However, for likelihood-based inference such as estimation or goodness-of-fit testing it is important to have the density. The present work fills this gap. After a short introduction to copulas and nested Archimedean copulas, a general formula for the derivatives of the nodes and inner generators appearing in nested Archimedean copulas is developed. This leads to a tractable formula for the density of nested Archimedean copulas. Various examples, including well-known Archimedean families and transformations of such families, are given. Furthermore, a numerically efficient way to evaluate the log-density is presented.
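As a non-nested warm-up for such density formulas, the bivariate Clayton (Archimedean) copula has the closed-form density c(u,v) = (1+θ)(uv)^(−1−θ)(u^(−θ)+v^(−θ)−1)^(−2−1/θ), which can be sanity-checked against a finite-difference mixed partial of the copula C(u,v). The hierarchical case treated in this work generalizes such derivative computations to nodes and inner generators; the parameter values below are arbitrary.

```python
# Bivariate Clayton copula: CDF and closed-form density, with a
# finite-difference check of the density as the mixed partial of the CDF.
def clayton_cdf(u, v, theta):
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def clayton_density(u, v, theta):
    s = u ** -theta + v ** -theta - 1.0
    return (1.0 + theta) * (u * v) ** (-1.0 - theta) * s ** (-2.0 - 1.0 / theta)

def mixed_partial(f, u, v, h=1e-4):
    """Central finite difference for d^2 f / (du dv)."""
    return (f(u + h, v + h) - f(u + h, v - h)
            - f(u - h, v + h) + f(u - h, v - h)) / (4.0 * h * h)

theta, u, v = 2.0, 0.3, 0.7
exact = clayton_density(u, v, theta)
approx = mixed_partial(lambda a, b: clayton_cdf(a, b, theta), u, v)
print(exact, approx)   # the two values agree to several decimal places
```

For a d-dimensional (possibly nested) copula the density requires d-th order mixed derivatives, which is why a general formula for the generator derivatives is the key ingredient.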
248

Variation in prey availability and feeding success of larval Radiated Shanny (Ulvaria subbifurcata Storer) from Conception Bay, Newfoundland

Young, Kelly Victoria 10 July 2008 (has links)
Recruitment of pelagic fish populations is believed to be regulated during the planktonic larval stage due to high rates of mortality during the early life stages. Starvation is thought to be one of the main sources of mortality, despite the fact that there is rarely a strong correlation between the feeding success of larval fish and food availability as measured in the field. This lack of relationship may be caused in part by (i) inadequate sampling of larval fish prey and (ii) the use of total zooplankton abundance or biomass as proxies for larval food availability. Many feeding studies rely on measures of average prey abundance which do not adequately capture the variability, or patchiness, of the prey field as experienced by larval fish. Previous studies have shown that larvae may rely on these patches to increase their feeding success. I assess the variability in the availability of larval fish prey over a range of scales and model the small-scale distribution of prey in Conception Bay, Newfoundland. I show that the greatest variability in zooplankton abundance existed at the meter scale, and that larval fish prey were not randomly distributed within the upper mixed layer. This will impact both how well we can model the stochastic nature of larval fish cohorts and how well we can study larval fish feeding from gut content analyses. Expanding on six years of previous lab and field studies on larval Radiated Shanny (Ulvaria subbifurcata) from Conception Bay, Newfoundland, I assess the feeding success, niche breadth (S) and weight-specific feeding rates (SPC, d^-1) of the larvae to determine whether there are size-based patterns evident across the years. I found that both the amount of food in the guts and the niche breadth of larvae increased with larval size. There was a shift from low to high SPC with increasing larval size, suggesting that foraging success increases as the larvae grow.
My results suggest that efforts should be made to estimate the variability of prey abundance at scales relevant to larval fish foraging rather than using large-scale average abundance estimates, since small-scale prey patchiness likely plays a role in larval fish feeding dynamics. In addition, the characteristics of zooplankton (density, size and behaviour) should be assessed as not all zooplankton are preyed upon equally by all sizes of larval fish. Overall, this thesis demonstrates that indices based on averages fail to account for the variability in the environment and in individual larval fish, which may be confounding the relationship between food availability and larval growth.
249

Array Signal Processing for Beamforming and Blind Source Separation

Moazzen, Iman 30 April 2013 (has links)
A new broadband beamformer composed of nested arrays (NAs), multi-dimensional (MD) filters and multirate techniques is proposed for both linear and planar arrays. It is shown that this combination yields a frequency-invariant response. For a given number of sensors, the advantage of using NAs is that the effective aperture for low temporal frequencies is larger than with uniform arrays, which leads to high spatial selectivity at low frequencies. For a given aperture size, the proposed beamformer can be implemented with significantly fewer sensors and less computation than uniform arrays, at the price of a slight deterioration in performance. Taking advantage of the Noble identity and polyphase structures, the proposed method can be implemented efficiently. Simulation results demonstrate the good performance of the proposed beamformer in terms of frequency-invariant response and computational requirements. The broadband beamformer requires a filter bank with a non-compatible set of sampling rates, which is challenging to design. To address this issue, a filter bank design approach is presented. The approach formulates the design problem as an optimization problem whose performance index consists of a term depending on perfect reconstruction (PR) and a term depending on the magnitude specifications of the analysis filters. The design objectives are to achieve almost perfect reconstruction and to have the analysis filters satisfy prescribed frequency specifications. Several design examples show the satisfactory performance of the proposed method. A new blind multi-stage space-time equalizer (STE) is also proposed, which can separate narrowband sources from a mixed signal. Neither the directions of arrival (DOAs) nor a training sequence is assumed to be available at the receiver.
The beamformer and equalizer are jointly updated to combat both co-channel interference (CCI) and inter-symbol interference (ISI) effectively. Using subarray beamformers, the DOA of the captured signal, possibly time-varying, is estimated and tracked. The estimated DOA is used by the beamformer to provide strong CCI cancellation. To alleviate inter-stage error propagation, a mean-square-error sorting algorithm is used that assigns detected sources to different stages according to the reconstruction error at each stage. Further, to speed up convergence, a simple yet efficient DOA estimation algorithm is proposed that provides good initial DOAs for the multi-stage STE. Simulation results illustrate the good performance of the proposed STE and show that it can effectively deal with changing DOAs and time-variant channels.
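For intuition, a narrowband delay-and-sum beamformer on a uniform linear array can be sketched in a few lines: steering toward the source aligns the per-sensor phase delays and boosts output power. The proposed broadband NA/MD-filter beamformer is far more elaborate; the array size, source angle and noise level below are arbitrary illustrative choices.

```python
import numpy as np

def steering_vector(n_sensors, spacing_wavelengths, theta):
    """Plane-wave phase delays across a uniform linear array."""
    k = 2.0 * np.pi * spacing_wavelengths * np.sin(theta)
    return np.exp(-1j * k * np.arange(n_sensors))

def delay_and_sum(snapshots, theta, spacing=0.5):
    """Steer toward angle theta and average the aligned sensor signals."""
    n = snapshots.shape[0]
    w = steering_vector(n, spacing, theta) / n
    return w.conj() @ snapshots

rng = np.random.default_rng(0)
n, t = 8, 2000
theta_sig = np.deg2rad(20.0)
s = rng.standard_normal(t)                      # narrowband source envelope
a = steering_vector(n, 0.5, theta_sig)
noise = 0.1 * (rng.standard_normal((n, t)) + 1j * rng.standard_normal((n, t)))
x = np.outer(a, s) + noise                      # array snapshots

power_on = np.mean(np.abs(delay_and_sum(x, theta_sig)) ** 2)
power_off = np.mean(np.abs(delay_and_sum(x, np.deg2rad(-40.0))) ** 2)
print(power_on / power_off)   # steering at the source gives much higher power
```

A broadband design must keep this spatial response constant across temporal frequency, which is what the NA plus MD-filter plus multirate combination in the thesis is built to achieve.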
