About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
481

A generic neural network framework using design patterns

Van der Stockt, Stefan Aloysius Gert 28 August 2008 (has links)
Designing object-oriented software is hard, and designing reusable object-oriented software is even harder. This task is even more daunting for a developer of computational intelligence applications, as optimising one design objective tends to make others inefficient or even impossible. Classic examples in computer science include ‘storage vs. time’ and ‘simplicity vs. flexibility.’ Neural network requirements are by their very nature tightly coupled: a required design change in one area of an existing application tends to have severe effects in other areas, making the change impossible or inefficient. Often this situation leads to a major redesign of the system and, in many cases, a complete rewrite of the application. Many commercial and open-source packages do exist, but these cannot always be extended to support input from other fields of computational intelligence, either for proprietary reasons or because they fail to take all design requirements into consideration. Design patterns make a science out of writing software that is modular, extensible and efficient, as well as easy to read and understand. The essence of a design pattern is to avoid repeatedly solving the same design problem from scratch by reusing a solution that solves the core problem. This pattern, or template for the solution, has well-understood prerequisites, structure, properties, behaviour and consequences. CILib is a framework that allows developers to develop new computational intelligence applications quickly and efficiently. Flexibility, reusability and clear separation between components are maximised through the use of design patterns. Reliability is also promoted because the framework is open source, with many collaborators helping to keep it well designed and error free. This dissertation discusses the design and implementation of a generic neural network framework that allows users to design, implement and use any neural network model or algorithm in such a way that it can reuse, and be reused by, any other computational intelligence algorithm in the rest of the framework or by external applications. This is achieved by using object-oriented design patterns in the design of the framework. / Dissertation (MSc)--University of Pretoria, 2007. / Computer Science / unrestricted
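
As a small illustration of the kind of decoupling described above, the sketch below applies the Strategy pattern to neuron activation functions. It is only an illustrative sketch; the class names are hypothetical and are not taken from CILib or from the framework developed in the dissertation.

```python
from abc import ABC, abstractmethod
import math

class ActivationFunction(ABC):
    """Strategy interface: neurons depend on this abstraction rather than
    on any concrete activation function."""
    @abstractmethod
    def value(self, x: float) -> float: ...

class Sigmoid(ActivationFunction):
    def value(self, x: float) -> float:
        return 1.0 / (1.0 + math.exp(-x))

class ReLU(ActivationFunction):
    def value(self, x: float) -> float:
        return max(0.0, x)

class Neuron:
    """Closed for modification, open for extension: new activation
    functions plug in without any change to this class."""
    def __init__(self, weights, activation: ActivationFunction):
        self.weights = weights
        self.activation = activation

    def fire(self, inputs):
        net = sum(w * i for w, i in zip(self.weights, inputs))
        return self.activation.value(net)

# Swapping the activation strategy requires no change to Neuron.
print(Neuron([0.5, -0.25], Sigmoid()).fire([1.0, 2.0]))
print(Neuron([0.5, -0.25], ReLU()).fire([1.0, 2.0]))
```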
482

Experimentelle und numerische Untersuchungen zur Verfahrensentwicklung des Unrunddrückens / Experimental and numerical investigations on the process development of non-circular spinning

Härtel, Sebastian 18 March 2013 (has links)
In order to extend the economic relevance and the flexibility of metal spinning, a machine and control concept has been developed that also permits the production of non-rotationally symmetric parts. In addition to experimental investigations for process development, a numerical process optimisation of the newly developed "Unrunddrücken" (non-circular spinning) process was carried out in order to produce non-circular parts free of wrinkles and cracks and with low sheet-thickness reduction. Initial experimental investigations identified the main technological factors influencing the failure modes of wrinkling and cracking as well as sheet thinning. Building on these results, a calibrated simulation model was developed that makes it possible to deepen the understanding of the process and thus to consider the failure modes holistically across the entire process. The insights gained were used to derive optimisation measures for non-circular spinning. It was demonstrated experimentally that wrinkling and cracking as well as sheet-thickness reduction can be significantly reduced by these optimisation measures. The non-circular spinning process developed in this work constitutes an efficient, cost-effective and, above all, flexible manufacturing process for producing non-rotationally symmetric parts with a nearly constant sheet-thickness distribution.
483

On control and estimation problems in antilock braking systems / Quelques problèmes de commande et d'estimation liés aux systèmes d'antiblocage des roues

Aguado rojas, Missie María del Rocío 14 June 2019 (has links)
This thesis addresses three problems related to the antilock braking system (ABS) in the context of the wheel dynamics: the estimation of the tyre extended braking stiffness (XBS) during an emergency braking situation, the control of the ABS based on the estimation of the XBS, and the estimation of the angular velocity and acceleration of the wheel from the measurements of an incremental encoder with imperfections. The general objective of this work is to develop tools aimed at improving the performance of braking systems by using techniques adapted from nonlinear control theory. The first part of the manuscript is devoted to the construction of a switched adaptive observer for the XBS, that is, an adaptive observer whose estimation gains switch between two possible values based on the sign of the system's measured output. The stability of the observer is analyzed using tools for switched and cascaded systems, as well as concepts such as persistency of excitation and singular time-scale transformations. The second part of the manuscript is dedicated to the design of a control algorithm for the ABS. The control objective is formulated in terms of the XBS and a hybrid controller is designed so that the trajectories of the system satisfy the conditions required for the estimation of the XBS. The stability of the controller is analyzed using the Poincaré map. The third part of the manuscript focuses on the construction of an algorithm to estimate the angular velocity and acceleration of the wheel and remove perturbations which are introduced by the encoder imperfections and whose amplitude and frequency are a function of the wheel's (real) angular position, velocity, and acceleration. The algorithm is based on the method known as the time-stamping algorithm, as well as on filtering and parameter-estimation techniques. Experimental tests and numerical simulations illustrate the performance of the estimation and control algorithms presented in this thesis. In all cases our results are compared against the state of the art.
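
For orientation only, the sketch below shows the basic idea behind the time-stamping method mentioned above, under the simplifying assumption of perfectly uniform encoder edges; the compensation of encoder imperfections and the filtering developed in the thesis are not reproduced here.

```python
import math

def timestamp_velocity(edge_times, pulses_per_rev):
    """Basic time-stamping estimate of angular velocity: the nominal angle
    between two consecutive encoder edges divided by the time elapsed
    between their timestamps. Real encoders have non-uniform edge spacing,
    which is exactly the kind of imperfection the thesis deals with."""
    dtheta = 2 * math.pi / pulses_per_rev           # nominal angle per edge
    return [dtheta / (t1 - t0)                      # rad/s between edges
            for t0, t1 in zip(edge_times, edge_times[1:])]

# Edges of a wheel spinning at a constant 10 rad/s, 100 pulses per revolution.
times = [k * (2 * math.pi / 100) / 10.0 for k in range(6)]
print(timestamp_velocity(times, 100))               # ~[10.0, 10.0, ...]
```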
484

Non-Convex Optimization for Latent Data Models : Algorithms, Analysis and Applications / Optimisation Non Convexe pour Modèles à Données Latentes : Algorithmes, Analyse et Applications

Karimi, Belhal 19 September 2019 (has links)
Many problems in machine learning pertain to tackling the minimization of a possibly non-convex and non-smooth function defined on a Euclidean space. Examples include topic models, neural networks or sparse logistic regression. Optimization methods used to solve those problems have been widely studied in the literature for convex objective functions and are extensively used in practice. However, recent breakthroughs in statistical modeling, such as deep learning, coupled with an explosion of data samples, require improvements of non-convex optimization procedures for large datasets. This thesis is an attempt to address those two challenges by developing algorithms with cheaper updates, ideally independent of the number of samples, and improving the theoretical understanding of non-convex optimization, which remains rather limited. In this manuscript, we are interested in the minimization of such objective functions for latent data models, i.e., when the data is partially observed, which includes the conventional sense of missing data but is much broader than that. In the first part, we consider the minimization of a (possibly) non-convex and non-smooth objective function using incremental and online updates. To that end, we propose several algorithms exploiting the latent structure to efficiently optimize the objective and illustrate our findings with numerous applications. In the second part, we focus on the maximization of a non-convex likelihood using the EM algorithm and its stochastic variants. We analyze several faster and cheaper algorithms and propose two new variants aiming at speeding up the convergence of the estimated parameters.
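
As a point of reference for the latent-data setting described above, here is a minimal batch EM iteration for a toy two-component Gaussian mixture, where the latent data are the unobserved component assignments. It is purely illustrative; the incremental, online and stochastic variants analysed in the thesis are not shown.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Plain batch EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialisation
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])                        # mixture weights

    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = np.stack([
            pi[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
            / (sigma[k] * np.sqrt(2 * np.pi))
            for k in range(2)
        ], axis=1)
        resp = dens / dens.sum(axis=1, keepdims=True)

        # M-step: re-estimate weights, means and standard deviations.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return pi, mu, sigma

# Example: data drawn from two well-separated Gaussians.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 1.0, 300)])
print(em_gmm_1d(x))
```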
485

漸増動的解析(IDA)に基づく既設長大橋の耐震性能評価に関する研究 / A study on the seismic performance evaluation of existing long-span bridges based on incremental dynamic analysis (IDA)

木田, 秀人 25 November 2014 (has links)
Kyoto University / 0048 / New system, course doctorate / Doctor of Engineering / Kō No. 18651 / Kōhaku No. 3960 / 新制||工||1609 (University Library) / 31565 / Department of Civil and Earth Resources Engineering, Graduate School of Engineering, Kyoto University / (Chief examiner) Professor Kunitomo Sugiura, Professor Hiromichi Shirato, Professor Akira Igarashi / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
486

On the Identification of Favorable Data Profile for Lithium-Ion Battery Aging Assessment with Consideration of Usage Patterns in Electric Vehicles

Huang, Meng January 2019 (has links)
No description available.
487

Global finite-time observers for a class of nonlinear systems

Li, Yunyan January 2013 (has links)
The contributions of this thesis lie in the area of global finite-time observer design for a class of nonlinear systems with bounded rational and mixed rational powers imposed on the incremental rate of the nonlinear terms, whose solutions exist and are unique for all positive time. In the thesis, two different kinds of nonlinear global finite-time observers are designed by employing finite-time theory and homogeneity properties with different methods. The global finite-time stability of both proposed observers is derived on the basis of Lyapunov theory. For a class of nonlinear systems with rational and mixed rational powers imposed on the nonlinearities, the first global finite-time observers are designed, where the global finite-time stability of the observation systems is established in two parts by combining asymptotic stability and local finite-time stability. The proposed observers can only be designed for the class of nonlinear systems with dimensions greater than 3. The observers have a dynamic high gain and two homogeneous terms, one homogeneous of degree greater than 1 and the other of degree less than 1. In order to prove the global finite-time stability of the proposed results, two homogeneous Lyapunov functions are provided, corresponding to the two homogeneous terms. One is homogeneous of degree greater than 1, which makes the observation error systems converge into a spherical region around the origin, and the other is of degree less than 1, which ensures local finite-time stability. The second global finite-time observers are also proposed based on the high-gain technique, which does not place any limitation on the dimension of the nonlinear systems. Compared with the first global finite-time observers, the newly designed observers have only one homogeneous term and a new gain update law where two new terms are introduced to dominate some terms in the nonlinearities and ensure global finite-time stability as well. The global finite-time stability is obtained directly based on a sufficient condition for finite-time stability, and only one Lyapunov function is employed in the proof. The validity of the two kinds of global finite-time observers that have been designed is illustrated through some simulation results. Both of them can make the observation error systems converge to the origin in finite time. The parameters, initial conditions and high gain all have some impact on the convergence time, with the high gain playing the strongest role: the larger the high gain, the shorter the convergence time. In order to show the performance of the two kinds of observers more clearly, two examples are provided and some comparisons are made between them. Through these, it can be seen that, under the same parameters and initial conditions, although the amplitude of the observation error curve is slightly greater, the global finite-time observers with a new gain update law can make the observation error systems converge much more quickly than the global finite-time observers with two homogeneous terms. In the simulation results, one can see that, as a common drawback of high-gain observers, they are noise-sensitive. Finding methods to improve their robustness and adaptiveness will be quite interesting, useful and challenging. / Thesis (PhD)--University of Pretoria, 2013. / gm2014 / Electrical, Electronic and Computer Engineering / unrestricted
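
To illustrate the role that the high gain plays in the convergence speed discussed above, the toy simulation below runs a classical (asymptotic, not finite-time) high-gain observer on an assumed second-order system. The plant, gains and dimensions are purely illustrative and do not reproduce the homogeneous finite-time observers designed in the thesis.

```python
import numpy as np

def simulate_high_gain_observer(T=0.5, dt=1e-3, L=20.0, k1=2.0, k2=1.0):
    """Euler simulation of a high-gain observer for the toy system
    x1' = x2, x2' = -x1 - x2 with measured output y = x1."""
    def f(s):
        # Assumed plant dynamics, also used as the observer's model copy.
        return np.array([s[1], -s[0] - s[1]])

    x = np.array([1.0, -0.5])                 # true state
    xh = np.zeros(2)                          # observer estimate
    for _ in range(int(T / dt)):
        y = x[0]                              # measured output
        # Output-error injection with gains scaled by L and L**2.
        inj = np.array([k1 * L * (y - xh[0]), k2 * L ** 2 * (y - xh[0])])
        x = x + dt * f(x)
        xh = xh + dt * (f(xh) + inj)
    return np.linalg.norm(x - xh)             # estimation error after T seconds

# A larger gain L leaves a smaller residual error after the same time
# (at the price of greater noise sensitivity in practice).
print(simulate_high_gain_observer(L=5.0), simulate_high_gain_observer(L=50.0))
```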
488

Innovation Management Systemicity : How Systemic Dimensions of Innovation Management Influence Innovation Capabilities / Systemiskhet i innovationsledning : Hur systemiska dimensioner av innovationsledning påverkar innovationsförmåga

Lundbäck, Linnéa, Sundin, Linnea January 2023 (has links)
The field of innovation management has gained extensive knowledge; however, there has been a tendency to study its various aspects in isolation rather than with a systems perspective, resulting in a potential oversight of interconnections between important aspects. While systems perspectives have been employed in organization and management research for over half a century, they have only more recently gained relevance in innovation management research. A sign of this growing relevance is the creation of an international guidance standard for innovation management systems, ISO 56002, published in 2019. Research indicates that implementing innovation management systems, with or without using a standard, has facilitated the transition from ad hoc practices to more integrated ones. Thus, it is interesting to investigate how systems approaches influence innovation capabilities. Furthermore, managing radical and incremental innovation may require different approaches, which presents challenges, particularly for large companies that prioritize projects typically involving incremental innovation. Consequently, studying how incremental and radical innovation are influenced when managing both types within the same system becomes relevant. In this study, the concept of systemicity was used to describe how systemic a system is through three system dimensions: comprehensiveness, coherence and correspondence, which overlap with those commonly used in the literature to describe and define a system. The purpose of the study was to gain a better understanding of innovation management from a systems perspective by investigating how different systemic dimensions influence and relate to innovation capabilities, analysing these capabilities from a systems perspective, exploring hindrances and opportunities with a systems approach, and investigating the relationship between these systemic dimensions and incremental and radical innovation capabilities respectively. The study was part of a research project with the overarching aim to investigate important future development trends and evaluate the innovation capabilities of the Swedish metallic materials industry, to be used as a basis for developing strategies for how the industry can support the transition towards sustainability. A multiple-case study of two companies within the metallic materials industry was conducted using an abductive approach, including a literature study, data collection, coding, and analysis. Semi-structured interviews were employed, where interview questions were based on the ISO 56002 standard and its seven system elements. The analysis involved within-case and cross-case analysis. The findings from the study imply that having capabilities related to all seven system elements seems to be important for an organization's overall innovation capability, highlighting the significance of comprehensiveness. The interconnections between elements highlight the significance of coherence, and establishing correspondence between capabilities and goals enhances innovation capability. While exhibiting comprehensiveness, coherence, and correspondence is beneficial, the suitable level of systemicity remains uncertain. Further research is needed to determine the balance between systemicity and flexibility for effective support of innovation capabilities.
Furthermore, in relation to managing incremental and radical innovation together, the study indicates the importance of considering systemicity to identify opportunities and hindrances when managing both types in the same system. The analysis of comprehensiveness highlights the need for distinct management approaches and capabilities for each innovation type. The analysis of coherence emphasizes the importance of considering interdependencies between system elements to avoid obstacles that arise when such interdependencies are overlooked. The analysis of correspondence suggests separate goal setting for incremental and radical innovation due to their distinct objectives and outcomes. Based on the method and findings, a tentative framework was developed for analysis and evaluation of innovation management systemicity.
489

Incremental Learning of Deep Convolutional Neural Networks for Tumour Classification in Pathology Images

Johansson, Philip January 2019 (has links)
Understaffing of medical doctors is becoming a pressing problem in many healthcare systems. This problem can be alleviated by utilising Computer-Aided Diagnosis (CAD) systems to substitute for doctors in certain tasks, for instance histopathological image classification. The recent surge of deep learning has allowed CAD systems to perform this task with very competitive performance. However, a major challenge with this task is the need to periodically update the models with new data and/or new classes or diseases. These periodical updates result in catastrophic forgetting, as Convolutional Neural Networks typically require the entire data set beforehand and tend to lose knowledge about old data when trained on new data. Incremental learning methods were proposed to alleviate this problem in deep learning. In this thesis, two incremental learning methods, Learning without Forgetting (LwF) and a generative rehearsal-based method, are investigated. They are evaluated on two criteria: the first is the capability of incrementally adding new classes to a pre-trained model, and the second is the ability to update the current model with a new, unbalanced data set. Experiments show that LwF does not retain knowledge properly in either case. Further experiments are needed to draw any definite conclusions, for instance using another training approach for the classes and trying different combinations of losses. On the other hand, the generative rehearsal-based method tends to work for one class, showing good potential to work if better-quality images were generated. Additional experiments are also required to investigate new architectures and approaches for more stable training.
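
As a rough sketch of the Learning-without-Forgetting objective evaluated above, the snippet below combines a cross-entropy term on the newly added classes with a distillation term that keeps the updated network's outputs on the old task heads close to those of the frozen pre-update network. It assumes a PyTorch setting; the temperature and weighting are illustrative defaults, not values from the thesis.

```python
import torch
import torch.nn.functional as F

def lwf_loss(new_logits_new_task, labels, new_logits_old_tasks,
             old_logits_old_tasks, temperature=2.0, alpha=1.0):
    """Learning-without-Forgetting style loss: cross-entropy on the new
    classes plus knowledge distillation on the old task heads."""
    # Standard cross-entropy on the newly added classes.
    ce = F.cross_entropy(new_logits_new_task, labels)

    # Distillation: soften both networks' outputs on the old heads and
    # penalise their divergence (Hinton-style knowledge distillation).
    t = temperature
    old_soft = F.softmax(old_logits_old_tasks / t, dim=1)
    new_log_soft = F.log_softmax(new_logits_old_tasks / t, dim=1)
    distill = F.kl_div(new_log_soft, old_soft, reduction="batchmean") * (t * t)

    return ce + alpha * distill

# Example with random logits: 3 old classes, 2 new classes, batch of 4.
old_out = torch.randn(4, 3)     # frozen pre-update model, old heads
new_old = torch.randn(4, 3)     # updated model, old heads
new_new = torch.randn(4, 2)     # updated model, new heads
labels = torch.tensor([0, 1, 1, 0])
print(lwf_loss(new_new, labels, new_old, old_out))
```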
490

Compilation Techniques, Algorithms, and Data Structures for Efficient and Expressive Data Processing Systems

Supun Madusha Bandara Abeysinghe Tennakoon Mudiyanselage (17454786) 30 November 2023 (has links)
The proliferation of digital data, driven by factors like social media and e-commerce, has created an increasing demand for highly processed data at higher levels of fidelity, which puts increasing demands on modern data processing systems. In the past, data processing systems faced bottlenecks due to limited main memory availability. However, as main memory becomes more abundant, their optimization focus has shifted from disk I/O to optimized computation through techniques like compilation. This dissertation addresses several critical limitations within such compilation-based data processing systems.

In modern data analytics pipelines, combining workloads from various paradigms, such as traditional DBMS and machine learning, is common. These pipelines are typically managed by specialized systems designed for specific workload types. While these specialized systems optimize their individual performance, substantial performance loss occurs when they are combined to handle mixed workloads. This loss is mainly due to overheads at system boundaries, including data copying and format conversions, as well as the general inability to perform cross-system optimizations.

This dissertation tackles this problem from two angles. First, it proposes an efficient post-hoc integration of individual systems using generative programming via the construction of common intermediate layers. This approach preserves the best-of-breed performance of individual workloads while achieving state-of-the-art performance for combined workloads. Second, we introduce a high-level query language capable of expressing various workload types, acting as a general substrate to implement combined workloads. This allows the generation of optimized code for end-to-end workloads through the construction of an intermediate representation (IR).

The dissertation then shifts focus to data processing systems used for incremental view maintenance (IVM). While existing IVM systems achieve high performance through compilation and novel algorithms, they have limitations in handling specific query classes. Notably, they are incapable of handling queries involving correlated nested aggregate subqueries. To address this, our work proposes a novel indexing scheme based on a new data structure and a corresponding set of algorithms that fully incrementalize such queries. This approach results in substantial asymptotic speedups and order-of-magnitude performance improvements for workloads of practical importance.

Finally, the dissertation explores efficient and expressive fixed-point computations, with a focus on Datalog, a language widely used for declarative program analysis. Although existing Datalog engines rely on compilation and specialized code generation to achieve performance, they lack the flexibility to support extensions required for complex program analysis. Our work introduces a new Datalog engine built using generative programming techniques that offers both flexibility and state-of-the-art performance through specialized code generation.
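
As a small, self-contained illustration of the fixed-point computations mentioned above, the sketch below performs semi-naive evaluation of the classic transitive-closure Datalog program. It is a teaching example only and does not reflect the code-generation approach of the engine described in the dissertation.

```python
def transitive_closure(edges):
    """Semi-naive fixed-point evaluation of the Datalog program
        path(x, y) :- edge(x, y).
        path(x, z) :- path(x, y), edge(y, z).
    Only facts derived in the previous round (the delta) are joined with
    edge/2 in each iteration, so known facts are never re-derived."""
    path = set(edges)                   # path(x, y) :- edge(x, y).
    delta = set(edges)                  # facts that are new this round
    while delta:
        new_facts = {(x, z)
                     for (x, y) in delta
                     for (y2, z) in edges
                     if y == y2} - path
        path |= new_facts
        delta = new_facts
    return path

# Example: a small edge relation forming a chain 1 -> 2 -> 3 -> 4.
print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
```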
