21 |
Implantable neural spheroid networks utilizing a concave microwell array. Chang, Joon Young. January 2013.
The goal of this study was to create pre-formed neural spheroid networks (NSNs) on a polydimethylsiloxane (PDMS) concave microwell array for eventual implantation into the rat brain. Recent studies have shown that stem cells have great potential for treating neurological insults of the central nervous system, ranging from traumatic brain and spinal cord injury to neurodegenerative disorders. However, the use of stem cell lines in research is controversial because of the methods used to obtain the cells, their tendency to form teratomas and degenerate into cancer cells, their non-specific differentiation, and the inability to control the location of neural connections. A novel approach to these issues uses pre-formed neural networks consisting of neural spheroids on polymer scaffolds for implantation into the rat brain. In earlier work, however, the cylindrical shape of the wells hindered the transfer of the networks. This study aimed to overcome this poor detachment of the neural spheroid networks by using concave well structures fabricated with a simple method developed in this laboratory.
Primary neurons were isolated from pregnant Sprague Dawley rats at 16-17 days of gestation. The isolated neurons were cultured in PDMS wells with a concave structure, interconnected by rounded microchannels. A concave structure was previously reported to enable easier and more efficient spheroid formation, as well as easier extraction of the spheroids. Various studies have also demonstrated the effectiveness of guidance channels in promoting neurite growth. Microchannels were therefore integrated into the microwell array design to serve as guidance conduits that enhance neurite growth and, in turn, spheroid interconnection.
The primary neurons formed spheroids after 3 days, upon which they began to sprout new neurites; by day 8, neurite connections peaked. Spheroid diameter decreased initially and stabilized by day 2. Various well diameters (300-700 um) and channel lengths (1.5 to 3 times the well diameter) were evaluated, and a 300 um well diameter with a 450 um center-to-center channel length was found to be optimal. The completed network was assessed for interconnection using calcium imaging, which showed coordinated calcium signals between the neural spheroids. The network was then successfully transferred to a collagen Matrigel and cultured for a week. The methodology improved the transfer of networks, with an extraction rate of about 90%. The viability of the NSNs on the Matrigel was assessed with a Live/Dead assay, and more than 95% of cells were viable. The optimal hydrophilicity for neurite extension and for transfer of the NSNs onto the Matrigel was also determined; an incubation time of 4-6 hours was found to be optimal.
Future studies will involve implantation of the NSN into the rat brain. The use of neural progenitor and stem cell lines may also provide an autologous source of cells that is immunocompatible with the host; marrow stromal cells are of particular interest because they may additionally address the ethical concerns noted above. A long-term goal is to refine the methodology and apply this research to studies on the treatment of patients suffering from spinal cord injury and other neurodegenerative disorders.
22 |
The Effect of Projectile Nose Shape on the Formation of the Water Entry Cavity. Ellis, Jeremy Conrad. 01 June 2016.
This research focuses on the effect of several convex and concave nose shapes on cavity formation for both hydrophilic and hydrophobic projectiles. It specifically investigates the effect of convex shape on the threshold velocity for cavity formation, as well as the effect of concave shapes on cavity formation in terms of impact velocity, the geometry of the concave shape, and the wettability of the projectile. For the convex cases, a streamlined axisymmetric shape significantly increases the threshold velocity at which cavities form, an effect most pronounced for the ogive and cone noses. The study demonstrates that measuring the wetting angle and impact velocity is not enough to predict cavity behavior; the roughness and nose shape must also be taken into account for convex projectiles. For the concave cases, the cavities formed are strongly influenced by impact speed and nose shape. Wetting angle had no visible effect on the cavity formed at higher impact speeds (7 m/s). The dynamics of cavity formation are dominated by the pocket of air trapped when the concave projectiles impact the water. At low impact speeds (~0-1 m/s) the trapped air can separate the flow from the leading edge of the projectile nose as it vents, causing a large cavity to form, depending on the specific concave shape and speed. At moderate impact speeds (1-4 m/s) the trapped air vents completely underwater, forming a small ring-shaped cavity. At high impact speeds (4-10 m/s) the trapped pocket of air is strongly compressed and produces an unsteady pressure pulse, which can result in the formation of a bubble and jet in front of the cavity. The jet forms as water passes behind the pocket of trapped air along the walls of the concave nose, converges into a jet at the top of the concave shape, and entrains the trapped air as it descends.
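Below is a rough helper that encodes the impact-speed regimes summarized above for concave noses. The thresholds are the approximate values quoted in this abstract; in practice they shift with the specific concave geometry and wettability, so this is descriptive bookkeeping rather than a predictive model.

```python
# Descriptive sketch only: regime boundaries are the approximate values from the
# abstract and depend in reality on nose geometry and wettability.
def concave_cavity_regime(impact_speed_m_s: float) -> str:
    if impact_speed_m_s < 1.0:
        return "low speed: trapped air may vent at the leading edge and seed a large cavity"
    if impact_speed_m_s < 4.0:
        return "moderate speed: air vents underwater, forming a small ring-shaped cavity"
    if impact_speed_m_s <= 10.0:
        return "high speed: compressed air pocket pulses, possibly forming a bubble and jet ahead of the cavity"
    return "outside the speed range studied (~0-10 m/s)"

for v in (0.5, 2.0, 7.0):
    print(f"{v:4.1f} m/s -> {concave_cavity_regime(v)}")
```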
23 |
Statistical inference in high dimensional linear and AFT models. Chai, Hao. 01 July 2014.
Variable selection procedures for high-dimensional data have been proposed and studied in a large body of literature over the last few years. Most of this research focuses on selection properties and point estimation. In this work, our goal is to construct confidence intervals for low-dimensional parameters in the high-dimensional setting. The models we study are partially penalized linear and accelerated failure time (AFT) models with high-dimensional data. In our setup, the covariates are split into two groups: the first consists of a relatively small number of variables of primary interest, while the second consists of a large number of variables that are potentially correlated with the response. We propose an approach that selects variables from the second group and produces confidence intervals for the parameters of the first group. We show sign consistency of the selection procedure and give a bound on the estimation error. Based on this result, we provide sufficient conditions for the asymptotic normality of the low-dimensional parameter estimates. The high-dimensional selection consistency and the low-dimensional asymptotic normality are established for both linear and AFT models.
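As a rough illustration of this two-group setup, the sketch below screens the high-dimensional group with a cross-validated lasso and then refits by least squares to obtain normal-approximation confidence intervals for the low-dimensional coefficients. This generic select-then-refit recipe on simulated data is only a stand-in for the partially penalized procedure analysed here; all data and tuning choices are made up.

```python
# Illustrative select-then-refit sketch, not the thesis's partially penalized method.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p_low, p_high = 200, 2, 500
X = rng.standard_normal((n, p_low))            # low-dimensional group of interest
Z = rng.standard_normal((n, p_high))           # high-dimensional nuisance group
beta = np.array([1.0, -0.5])
gamma = np.zeros(p_high)
gamma[:3] = [0.8, 0.0, -0.6]                   # only a few nuisance effects are nonzero
y = X @ beta + Z @ gamma + rng.standard_normal(n)

# Step 1: screen the second group with a cross-validated lasso on (X, Z).
sel = LassoCV(cv=5).fit(np.hstack([X, Z]), y)
support = np.flatnonzero(sel.coef_[p_low:])    # selected columns of Z

# Step 2: refit by least squares on X plus the selected Z columns and report
# normal-approximation confidence intervals for the X coefficients.
D = np.hstack([X, Z[:, support]])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
resid = y - D @ coef
sigma2 = resid @ resid / (n - D.shape[1])
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(D.T @ D)))
for j in range(p_low):
    lo, hi = coef[j] - 1.96 * se[j], coef[j] + 1.96 * se[j]
    print(f"beta_{j}: estimate {coef[j]:+.3f}, 95% CI [{lo:+.3f}, {hi:+.3f}]")
```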
24 |
Volume distribution and the geometry of high-dimensional random polytopes. Pivovarov, Peter. 11 1900.
This thesis is based on three papers on selected topics in Asymptotic Geometric Analysis.

The first paper is about the volume of high-dimensional random polytopes, in particular polytopes generated by Gaussian random vectors. We consider the question of how many random vertices (or facets) should be sampled in order for such a polytope to capture significant volume. Various criteria for what exactly it means to capture significant volume are discussed. We also study similar problems for random polytopes generated by points on the Euclidean sphere.
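A small Monte Carlo experiment conveys the flavor of this question: sample N standard Gaussian vectors, take their convex hull, and watch how the volume grows with N. The dimension, sample sizes and number of repetitions below are arbitrary illustrative choices, not the criteria analysed in the thesis.

```python
# Illustrative Monte Carlo sketch: hull volume of N Gaussian points in R^d.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
d = 4                                          # ambient dimension (kept small for Qhull)

for n in (d + 1, 2 * d, 4 * d, 16 * d, 64 * d):
    vols = [ConvexHull(rng.standard_normal((n, d))).volume for _ in range(50)]
    print(f"N = {n:4d} vertices sampled -> mean hull volume {np.mean(vols):8.2f}")
```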
The second paper is about volume distribution in convex bodies. The first main result is about convex bodies that are (i) symmetric with respect to each of the coordinate hyperplanes and (ii) in isotropic position. We prove that most linear functionals acting on such bodies exhibit super-Gaussian tail-decay. Using known facts about the mean-width of such bodies, we then deduce strong lower bounds for the volume of certain caps. We also prove a converse statement. Namely, if an arbitrary isotropic convex body (not necessarily satisfying the symmetry assumption (i)) exhibits similar cap-behavior, then one can bound its mean-width.
The third paper is about random polytopes generated by sampling points according to multiple log-concave probability measures. We prove related estimates for random determinants and give applications to several geometric inequalities; these include estimates on the volume-radius of random zonotopes and Hadamard's inequality for random matrices. / Mathematics
25 |
Green Supply Chain Design: A Lagrangian Approach. Merrick, Ryan J. 21 May 2010.
The expansion of supply chains into global networks has drastically increased the distances travelled along shipping lanes in a logistics system. Inherently, the increase in travel distances produces increased carbon emissions from transport vehicles. When these increased emissions are combined with a carbon tax or an emissions trading system, the result is a supply chain with increased costs attributable to the emissions generated on the transportation routes. Most traditional supply chain design models do not take emissions and carbon costs into account. Hence, there is a need to incorporate emission costs into a supply chain optimization model to see how the optimal supply chain configuration may be affected by the additional expenses.
This thesis presents a mathematical programming model for the design of green supply chains. The costs of carbon dioxide (CO2) emissions were incorporated in the objective function, along with the fixed and transportation costs that are typically modeled in traditional facility location models. The model also determined the unit flows between the various nodes of the supply chain, with the objective of minimizing the total cost of the system by strategically locating warehouses throughout the network.
The literature shows that CO2 emissions produced by a truck are dependent on the weight of the vehicle and can be modeled using a concave function. Hence, the carbon emissions produced along a shipping lane are dependent upon the number of units and the weight of each unit travelling between the two nodes. Due to the concave nature of the emissions, the addition of the emission costs to the problem formulation created a nonlinear mixed integer programming (MIP) model.
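As a toy illustration of such a concave relationship (not the calibrated emission model used in the thesis), the snippet below uses a hypothetical power curve with exponent less than one, so emissions per trip grow with shipped weight at a decreasing rate and the per-unit emission cost falls as a lane is consolidated. All coefficients are made up.

```python
# Hypothetical concave emission curve; every number here is illustrative.
import numpy as np

def trip_emissions_kg(weight_kg, a=0.35, b=0.6):
    """Concave in weight: CO2 per trip ~ a * weight^b with 0 < b < 1 (assumed)."""
    return a * np.power(weight_kg, b)

carbon_price = 0.00005            # $ per kg of CO2, i.e. $50 per tonne (illustrative)
for units in (100, 1000, 10000):
    w = units * 25.0              # assumed 25 kg per unit shipped on the lane
    cost_per_unit = carbon_price * trip_emissions_kg(w) / units
    print(f"{units:>6} units -> emission cost per unit ${cost_per_unit:.6f}")
```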
A solution algorithm was developed to evaluate the new problem formulation. Lagrangian relaxation was used to decompose the problem by echelon and by potential warehouse site, resulting in a problem that required less computational effort to solve and allowed for much larger problems to be evaluated. A method was then suggested to exploit a property of the relaxed formulation and transform the problem into a linear MIP problem. The solution method computed the minimum cost for a complete network that would satisfy all the needs of the customers.
A primal heuristic was introduced into the Lagrangian algorithm to generate feasible solutions. The heuristic utilized data from the Lagrangian subproblems to produce good feasible solutions. Due to the many characteristics of the original problem that were carried through to the subproblems, the heuristic produced very good feasible solutions that were typically within 1% of the Lagrangian bound.
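The sketch below illustrates this overall solution strategy, Lagrangian relaxation with a subgradient update plus a primal heuristic that repairs the subproblem solution, on a plain uncapacitated facility-location instance with random data. It is a simplified stand-in: the thesis decomposes a multi-echelon network with emission costs, none of which is reproduced here, and every number in the example is hypothetical.

```python
# Simplified single-echelon stand-in for the thesis's Lagrangian approach.
import numpy as np

rng = np.random.default_rng(1)
n_cust, n_fac = 30, 8
fixed = rng.uniform(50, 150, n_fac)              # warehouse fixed costs (made up)
cost = rng.uniform(1, 20, (n_cust, n_fac))       # per-customer shipping costs (made up)

lam = np.zeros(n_cust)                           # multipliers on "each customer served once"
best_lb, best_ub, best_open = -np.inf, np.inf, None

for it in range(200):
    reduced = cost - lam[:, None]                # reduced assignment costs
    gain = fixed + np.minimum(reduced, 0).sum(axis=0)
    open_fac = gain < 0                          # open a site iff it pays off in the subproblem
    lb = lam.sum() + gain[open_fac].sum()        # Lagrangian lower bound
    best_lb = max(best_lb, lb)

    # Primal heuristic: keep the subproblem's open sites (at least one) and assign
    # every customer to its cheapest open warehouse to get a feasible solution.
    sites = open_fac.copy()
    if not sites.any():
        sites[np.argmin(gain)] = True
    ub = fixed[sites].sum() + cost[:, sites].min(axis=1).sum()
    if ub < best_ub:
        best_ub, best_open = ub, np.flatnonzero(sites)

    # Subgradient step on the relaxed assignment constraints.
    assigned = (reduced < 0) & open_fac[None, :]
    subgrad = 1.0 - assigned.sum(axis=1)
    if np.all(subgrad == 0) or best_ub - lb < 1e-9:
        break
    step = (best_ub - lb) / (subgrad @ subgrad)
    lam = lam + step * subgrad

print(f"open warehouses {best_open}, cost {best_ub:.1f}, gap {(best_ub - best_lb) / best_ub:.2%}")
```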
The proposed algorithm was evaluated through a number of tests. The rigidity of the problem and cost breakdown were varied to assess the performance of the solution method in many situations. The test results indicated that the addition of emission costs to a network can change the optimal configuration of the supply chain. As such, this study concluded that emission costs should be considered when designing supply chains in jurisdictions with carbon costs. Furthermore, the tests revealed that in regions without carbon costs it may be possible to significantly reduce the emissions produced by the supply chain with only a small increase in the cost to operate the system.
26 |
On minimally-supported D-optimal designs for polynomial regression with log-concave weight function. Lin, Hung-Ming. 29 June 2005.
This paper studies minimally-supported D-optimal designs for polynomial regression models with logarithmically concave (log-concave) weight functions.
Many commonly used weight functions in the design literature are log-concave.
We show that the determinant of the information matrix of a minimally-supported design is a log-concave function of the ordered support points and that the D-optimal design is unique. Therefore, D-optimal designs can be determined numerically and efficiently by standard constrained concave programming algorithms.
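A brief numerical sketch of this optimization is given below: it maximizes the log-determinant of the information matrix of a minimally-supported design (d + 1 support points with equal weights) for polynomial regression with a log-concave weight. The weight w(x) = exp(-x^2), the design space [-1, 1], and the general-purpose optimizer are illustrative stand-ins for the constrained concave programming algorithms referred to in the abstract.

```python
# Illustrative maximization of log det M(x) for a minimally supported design.
import numpy as np
from scipy.optimize import minimize

d = 3                                            # polynomial degree, so d + 1 support points
w = lambda x: np.exp(-x ** 2)                    # an illustrative log-concave weight

def neg_log_det(x):
    # For a minimally supported design with equal weights, det M(x) factors through
    # the Vandermonde matrix with rows (1, x_k, ..., x_k^d), up to a constant.
    V = np.vander(x, increasing=True)
    sign, logdet = np.linalg.slogdet(V)
    if sign <= 0:
        return 1e6                               # penalize coincident support points
    return -(2.0 * logdet + np.log(w(x)).sum())

x0 = np.linspace(-0.9, 0.9, d + 1)               # ordered starting support points
res = minimize(neg_log_det, x0, method="L-BFGS-B", bounds=[(-1.0, 1.0)] * (d + 1))
print("approximate D-optimal support points:", np.sort(res.x))
```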
27 |
A novel design of polishing tool for axially symmetrical surface. Yang, Jian-jhe. 11 August 2006.
This thesis develops a novel polishing tool system for the convex and concave symmetrical surfaces of a combined surface. The system has two design goals. First, it can be used to polish concave or convex cone surfaces of various dimensions and cone angles by adjusting the geometric features of its structure. Second, the polishing tool is expected to have a long service life in real applications. With these advantages, an efficient and controllable polishing system can be developed.

An inference process based on a top-down planning strategy was used to obtain the concept design of the polishing tool. The polishing tool system has two major parts. The first is its elastic structure, whose geometric configuration and applied working load are both adjustable. The second is the polishing tool of cylindrical shape; with this specific geometric feature, the effect of tool wear on the polishing rate is minimized. The finite element method was adopted to analyze the deformation characteristics of the elastic structure, and an optimal design for the shape and dimensions of the elastic structure was determined accordingly. The experimental study showed that the developed polishing system had highly repeatable machining rates. It was also demonstrated that the machining rate of the system was insensitive to tool wear during the polishing process; this advantage may significantly reduce the need for tool replacement. Finally, it was shown that the experimental trends of the machining rate with changes in applied load or polishing speed followed those of a cylindrical polishing system, and both can be properly predicted from lubrication theory.
30 |
A Rejection Technique for Sampling from T-Concave Distributions. Hörmann, Wolfgang. January 1994.
A rejection algorithm, called transformed density rejection, is introduced that uses a new method for constructing simple hat functions for a unimodal, bounded density $f$. It is based on the idea of transforming $f$ with a suitable transformation $T$ such that $T(f(x))$ is concave; $f$ is then called $T$-concave. Tangents of $T(f(x))$ at the mode and at a point on each side are used to construct a hat function with a table-mountain shape, and it is possible to give conditions for the optimal choice of these points of contact. With $T(x)=-1/\sqrt{x}$ the method can be used to construct a universal algorithm that is applicable to a large class of unimodal distributions, including the normal, beta, gamma and t-distributions. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
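The following is a minimal sketch of transformed density rejection with T(y) = -1/sqrt(y), applied to the (unnormalized) standard normal density: tangents of the transformed density at the mode and at one point on each side yield a table-mountain hat, exact samples are drawn from the hat by piecewise inversion, and the usual accept/reject step follows. The three construction points (-1, 0, 1) and all names are illustrative choices rather than values prescribed by the paper, which also treats the optimal placement of the points.

```python
# Minimal transformed-density-rejection sketch; construction points are illustrative.
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: np.exp(-0.5 * x * x)                 # unnormalized target density
fprime = lambda x: -x * np.exp(-0.5 * x * x)       # its derivative
g = lambda x: -1.0 / np.sqrt(f(x))                 # transformed density T(f(x))
gprime = lambda x: 0.5 * fprime(x) / f(x) ** 1.5   # derivative of T(f(x))

points = np.array([-1.0, 0.0, 1.0])                # construction points, mode in the middle
a, b = g(points), gprime(points)                   # tangents l_i(x) = a_i + b_i (x - p_i)

def tangent(i, x):
    return a[i] + b[i] * (x - points[i])

def intersect(i, j):                               # x where adjacent tangents cross
    return (a[j] - a[i] + b[i] * points[i] - b[j] * points[j]) / (b[i] - b[j])

bounds = [-np.inf, intersect(0, 1), intersect(1, 2), np.inf]

def piece_area(i):                                 # area under the hat h_i(x) = 1 / l_i(x)^2
    lo, hi = bounds[i], bounds[i + 1]
    if abs(b[i]) < 1e-12:                          # flat tangent at the mode
        return (hi - lo) / a[i] ** 2
    inv_lo = 0.0 if np.isinf(lo) else 1.0 / tangent(i, lo)
    inv_hi = 0.0 if np.isinf(hi) else 1.0 / tangent(i, hi)
    return (inv_lo - inv_hi) / b[i]

areas = np.array([piece_area(i) for i in range(3)])

def sample_hat():                                  # exact sampling from the table-mountain hat
    i = rng.choice(3, p=areas / areas.sum())
    u = rng.random()
    lo, hi = bounds[i], bounds[i + 1]
    if abs(b[i]) < 1e-12:                          # uniform on the flat middle piece
        return i, lo + u * (hi - lo)
    s = 0.0 if np.isinf(lo) else 1.0 / (b[i] * tangent(i, lo))
    l = 1.0 / (b[i] * (s - u * areas[i]))          # invert the hat CDF on piece i
    return i, points[i] + (l - a[i]) / b[i]

def tdr_sample(n):
    out = []
    while len(out) < n:
        i, x = sample_hat()
        if rng.random() / tangent(i, x) ** 2 <= f(x):   # accept with probability f(x)/h(x)
            out.append(x)
    return np.array(out)

xs = tdr_sample(10_000)
print(xs.mean(), xs.std())                         # close to 0 and 1 for the standard normal
```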