411 |
Síntesis Audiovisual Realista Personalizable / Realistic Personalizable Audiovisual Synthesis
Melenchón Maldonado, Javier 13 July 2007 (has links)
A unified framework is presented for the realistic, personalizable audiovisual synthesis and analysis of audiovisual sequences of talking heads and visual sequences of sign language in a domestic environment. The former provides fully synchronized animation driven by a text or auditory source; the latter consists of finger spelling. The personalization capabilities ease the creation of audiovisual sequences by non-expert users. Applications range from realistic virtual avatars for natural interaction or video games, through pronunciation and communication aids for the hard of hearing, to very low bandwidth videoconferencing and visual telephony for that same group. Long sequences can be processed with very limited resources, especially storage, thanks to a newly developed incremental computation procedure for the singular value decomposition with update of the mean information. This procedure is complemented by three others: the decremental, the split and the composition procedures.
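The incremental decomposition with mean update is the computational core of this framework. As a hedged illustration of the underlying idea (not the thesis' algorithm, which is a true incremental SVD with decremental, split and composition variants), the following Python sketch keeps the centering information up to date while samples stream in, using Welford-style updates of the mean and scatter matrix; an eigendecomposition of the resulting covariance then yields the same subspace a batch PCA would.

```python
import numpy as np

class StreamingPCA:
    """Streaming estimate of the mean and covariance of a data stream.

    Hypothetical illustration only: the thesis proposes a true incremental
    SVD with update of the mean (plus decremental, split and composition
    procedures). Here the same centering-while-streaming idea is shown via
    Welford-style updates of the mean and scatter matrix.
    """

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.scatter = np.zeros((dim, dim))  # sum of centered outer products

    def update(self, x):
        """Fold in one sample without storing past data."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        # Rank-1 update of the scatter matrix about the running mean.
        self.scatter += np.outer(delta, x - self.mean)

    def principal_axes(self):
        """Eigendecomposition of the covariance yields the PCA basis."""
        cov = self.scatter / max(self.n - 1, 1)
        vals, vecs = np.linalg.eigh(cov)
        return vals[::-1], vecs[:, ::-1]  # descending eigenvalue order

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 5))
spca = StreamingPCA(5)
for row in data:
    spca.update(row)

# The streaming estimates match the batch quantities.
assert np.allclose(spca.mean, data.mean(axis=0))
assert np.allclose(spca.scatter / 199, np.cov(data, rowvar=False))
```

The storage saving the abstract refers to comes from never keeping the raw sequence: only the running mean and the (low-rank, in the thesis' case) decomposition state are retained.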
|
412 |
Incremental sheet forming process : control and modelling
Wang, Hao January 2014 (has links)
Incremental Sheet Forming (ISF) is a progressive metal forming process in which deformation occurs locally around the point of contact between a tool and the metal sheet. The final workpiece is formed cumulatively by the movements of the tool, which is usually attached to a CNC milling machine. The ISF process is dieless in nature and capable of producing parts of different geometries with a universal tool. The tooling cost of ISF can be as low as 5–10% of that of conventional sheet metal forming processes. On the laboratory scale, the accuracy of parts created by ISF is between ±1.5 mm and ±3 mm. However, for ISF to be competitive with a stamping process, an accuracy below ±1.0 mm, and more realistically below ±0.2 mm, would be needed. In this work, we first studied the ISF deformation process with a simplified phenomenological linear model and employed a predictive controller to obtain a tool trajectory optimised to minimise the geometrical deviations between the target shape and the shape made by the ISF process. The algorithm was implemented on a rig at Cambridge University, and the experimental results demonstrate the effectiveness of the model predictive control (MPC) strategy: deviation errors of around ±0.2 mm were achieved for a number of simple geometrical shapes. The limitations of the underlying linear model for a highly nonlinear problem led us to study the ISF process with a physics-based model. We use an elastoplastic constitutive relation to model the material law and contact mechanics with Signorini-type boundary conditions to model the process, resulting in an infinite-dimensional system described by a partial differential equation. We further developed a computational method to solve the proposed mathematical model, using an augmented Lagrangian method in function space and discretising with the finite element method. The preliminary results demonstrate the possibility of using this model for optimal controller design.
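The receding-horizon control idea in the first part can be sketched with a toy one-dimensional example. This is an illustrative sketch only, with assumed model coefficients (the thesis' deviation model and rig software are far richer): a scalar linear deviation model and a horizon-H least-squares controller that drives the geometric error toward zero.

```python
import numpy as np

a, b = 1.0, 0.5          # assumed linear deviation-model coefficients
H = 5                    # prediction horizon

def mpc_step(d0, lam=0.1):
    """Return the first correction of the horizon-H least-squares problem
    min sum_k d_k^2 + lam * u_k^2, a receding-horizon (MPC-style) step."""
    F = np.array([a ** k for k in range(1, H + 1)])   # free response of d0
    G = np.zeros((H, H))                              # forced response of u
    for k in range(H):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    # Regularized normal equations: (G^T G + lam I) u = -G^T F d0
    u = np.linalg.solve(G.T @ G + lam * np.eye(H), -G.T @ F * d0)
    return u[0]

# Closed loop: the geometric deviation is driven toward zero.
d = 2.0
for _ in range(20):
    d = a * d + b * mpc_step(d)
assert abs(d) < 1e-3
```

Only the first correction of each horizon is applied before re-solving, which is what makes the scheme receding-horizon rather than open-loop.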
|
413 |
Implementation of decision trees for embedded systems
Badr, Bashar January 2014 (has links)
This research develops real-time incremental learning decision tree solutions suitable for real-time embedded systems, by virtue of having both a defined memory requirement and an upper bound on the computation time per training vector. In addition, the work provides embedded systems with the capability of rapid processing and training on streamed-data problems, and adopts electronic hardware solutions to improve the performance of the developed algorithms. Two novel decision tree approaches, namely the Multi-Dimensional Frequency Table (MDFT) and the Hashed Frequency Table Decision Tree (HFTDT), represent the core of this research. Both methods successfully incorporate a frequency table technique to produce a complete decision tree. The MDFT and HFTDT learning methods were designed with the ability to generate application-specific code for both training and classification, according to the requirements of the targeted application. The MDFT allows the memory architecture to be specified statically before learning takes place, with a deterministic execution time. The HFTDT method is a development of the MDFT in which a reduction in memory requirements is achieved, again with a deterministic execution time. The HFTDT achieved low memory usage compared to existing decision tree methods, and hardware acceleration improved performance by up to 10 times in terms of execution time.
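The frequency-table idea behind both methods can be sketched as follows. This is a hypothetical miniature, not the thesis' MDFT/HFTDT code: per-class counts are kept for every (attribute, value) pair, so each training vector costs a bounded amount of work, and the table can be sized statically when the attribute domains are known in advance; the HFTDT variant would hash the keys into a fixed-size array to cut memory further.

```python
# (attribute index, value) -> [count for class 0, count for class 1]
table = {}

def train(x, label):
    """Incremental update: bounded work per streamed training vector."""
    for attr, value in enumerate(x):
        table.setdefault((attr, value), [0, 0])[label] += 1

def classify(x):
    """Vote with the accumulated per-class frequencies (with smoothing)."""
    score = [1.0, 1.0]
    for attr, value in enumerate(x):
        counts = table.get((attr, value), [0, 0])
        score[0] *= counts[0] + 1
        score[1] *= counts[1] + 1
    return 0 if score[0] >= score[1] else 1

# Tiny streamed-data example with two attributes per vector.
for _ in range(50):
    train(('low', 'a'), 0)
    train(('high', 'b'), 1)
assert classify(('low', 'a')) == 0
assert classify(('high', 'b')) == 1
```

Because updates only touch one counter per attribute, both training time per vector and memory are deterministic, which is the property the thesis targets for embedded deployment.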
|
414 |
Sjukvårdskostnader i samband med vägtrafikolyckor för individer med och utan sömnapné / Healthcare costs associated with road traffic accidents involving individuals with and without Obstructive Sleep Apnoea
Khan, Ellen, Steen, Denise January 2016 (has links)
Previous research indicates that individuals with obstructive sleep apnoea (OSA) sustain more severe injuries in road traffic accidents than individuals without OSA. Previous studies have also shown that more severe injuries generate higher costs. There is, however, a knowledge gap as to whether the more severe injuries involving individuals with OSA result in higher healthcare costs in Sweden. Furthermore, earlier research implies that socio-effective cost analyses should be conducted if patients are to receive maximum care for the tax money spent on healthcare. An important part of such analyses is the identification, quantification and valuation of the costs relevant to a specific type of accident. The aim of the study is to produce, compare and analyse the healthcare costs arising from road traffic accidents in Sweden caused by individuals with and without OSA, over follow-up periods of one, two and three years. The healthcare costs include the costs incurred in inpatient and outpatient care as well as pharmaceutical costs. The aim is fulfilled through identification, quantification and valuation of these costs. In addition, an econometric model is constructed to examine the relationship between the explanatory variables age, sex and patient group and the dependent variable healthcare cost. The model is built on data from the accident register Swedish Traffic Accident Data Acquisition (STRADA) and the organisation European Sleep Apnoea Database (ESADA). The results present and analyse the healthcare costs in the light of socio-economic theory to determine whether a difference in healthcare costs exists and to investigate how any such difference is distributed according to the Pareto and Kaldor-Hicks criteria. The study's results demonstrate a marked difference in healthcare costs between the two patient groups: the marginal costs for the patient group with OSA are considerably higher than for the patient group without OSA, and the incremental costs also show relatively large differences in each follow-up year.
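The kind of econometric model described here can be illustrated with a small simulation. All numbers below are entirely made up (this is not the STRADA/ESADA data): healthcare cost is regressed by ordinary least squares on age, sex and patient group, and the coefficient on the OSA indicator plays the role of the incremental cost attributable to the group.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(20, 80, n)
sex = rng.integers(0, 2, n)        # 0 = female, 1 = male
osa = rng.integers(0, 2, n)        # 1 = sleep-apnoea patient group
# Assumed data-generating process: OSA adds a fixed cost increment.
cost = 2000 + 30 * age + 500 * sex + 4000 * osa + rng.normal(0, 300, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), age, sex, osa])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
intercept, b_age, b_sex, b_osa = beta

# The estimated coefficient on the OSA indicator recovers the simulated
# incremental healthcare cost of the sleep-apnoea group.
assert abs(b_osa - 4000) < 200
```

Holding age and sex fixed in the regression is what lets the group coefficient be read as a cost difference between otherwise comparable patients.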
|
415 |
String-averaging incremental subgradient methods for constrained convex optimization problems / Média das sequências e métodos de subgradientes incrementais para problemas de otimização convexa com restrições
Oliveira, Rafael Massambone de 12 July 2017 (has links)
In this doctoral thesis, we propose new iterative methods for solving a class of convex optimization problems. In general, we consider problems in which the objective function is composed of a finite sum of convex functions and the set of constraints is, at least, convex and closed. The iterative methods we propose are designed essentially by combining incremental subgradient methods with string-averaging algorithms. Furthermore, in order to obtain methods able to solve optimization problems with many constraints (possibly in high dimensions), generally given by convex functions, our analysis includes an operator that calculates approximate projections onto the feasible set instead of the Euclidean projection. This feature is employed in both methods we propose, one deterministic and the other stochastic. A convergence analysis is given for both methods, and numerical experiments are performed to verify their applicability, especially to large-scale problems.
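The scheme described above can be sketched in miniature. This is a hypothetical illustration, not the thesis' algorithms: the objective is the finite sum f(x) = sum_i |x - c_i|, the constraint set is the interval [0, 3], and a simple clip stands in for the approximate projection operator; each iteration runs an incremental subgradient pass along each "string" of components and then averages the string endpoints.

```python
c = [1.0, 2.0, 2.5, 4.0, 5.0]
strings = [[0, 1, 2], [3, 4]]       # a partition of the component indices

def project(x):
    """Projection onto the feasible interval [0, 3] (here, a clip)."""
    return min(max(x, 0.0), 3.0)

def subgrad(x, i):
    """A subgradient of the component |x - c_i|."""
    return 1.0 if x > c[i] else -1.0

def iterate(x, step):
    """One string-averaging iteration: an incremental subgradient pass
    along each string, then the average of the string endpoints."""
    ends = []
    for s in strings:
        y = x
        for i in s:
            y = project(y - step * subgrad(y, i))
        ends.append(y)
    return sum(ends) / len(ends)

x = 0.0
for k in range(1, 2000):
    x = iterate(x, step=1.0 / k)    # diminishing step sizes

# The constrained minimizer of f on [0, 3] is the median, x = 2.5.
assert abs(x - 2.5) < 0.1
```

Because the strings are independent, their passes could run in parallel, which is one motivation for string averaging in large-scale problems.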
|
416 |
Open Service Innovation in Industrial Networks
Myhrén, Per January 2019 (has links)
Constant development of new technologies in a rapidly changing and globalized world shortens product life cycles, and time-to-market is crucial for commercial success. This development requires resources to create new knowledge and skills within organizations and together, in networks, with other firms. Open innovation is an alternative for developing innovative products and services that takes advantage of external knowledge and gives access to new market channels. Even though services are vital for economic growth and fit well with the open innovation model, there is little research on open service innovation. The purpose of the thesis is to extend knowledge on how service innovations emerge and evolve in open innovation nets in industrial networks, and to follow the development from idea to commercial service. The thesis describes how organization allows service innovations to emerge and develop in open service innovation nets. It also describes the actors involved and their different innovator roles in the development from idea to commercial service. The present research provides insights into how the organization of development work may differ between incremental and radical service innovation: there is a range of organizing templates (archetypes) that fit different types of development work. Where previous research on open service innovation has focused on radical service innovation, the present research suggests that open service innovation can also be a strategy for incremental service innovation. The present research shows how actors take on multiple innovator roles in the open service innovation process: the more radical the changes, the more roles each actor takes on. It also adds a new innovator role to previous research, the Constitutional Monarch. The Constitutional Monarch has a central position in all archetypes but, as the name implies, no decision power. The research also sheds light on how the hub firm deploys not one but a portfolio of network orchestration processes, depending on the archetype used for open service innovation. / The development of new technologies in a rapidly changing and globalized world shortens product life cycles; time to market is crucial. Firms can no longer rely solely on internal knowledge in new product and service development; they require external resources to create new knowledge and skills within their organizations. Developing innovative products and services that takes advantage of external knowledge and gives access to new market channels is labeled open innovation. Even though the open innovation model is well known and widely spread, there is little research on open service innovation. The aim of the thesis is to understand and describe how service innovations emerge and evolve in open innovation nets (groups) in industrial networks, and to follow the development from idea to a commercial service. The thesis describes organization for service innovations to emerge and develop in open service innovation nets. It also explains the actors involved and their different innovator roles in the development of service innovations in open service innovation nets. The present research provides insights into how the organization of the development work may differ between incremental and radical service innovation. It suggests that open service innovation can be a strategy not only for radical but also for incremental service innovation. The thesis also presents a new innovator role to add to existing research, the Constitutional Monarch, who has a central position as a third-party facilitator catalyzing the innovation process but has no decision power.
|
417 |
Application des architectures many core dans les systèmes embarqués temps réel / Implementing a Real-time Avionic application on a Many-core Processor
Lo, Moustapha 22 February 2019 (has links)
Traditional single-core processors are no longer sufficient to meet the growing performance needs of avionic functions. Multi-core and many-core processors have emerged in recent years, making it possible to integrate several functions and to benefit from the available performance per watt through resource sharing. However, not all multi-core and many-core processors satisfy avionic constraints: we prefer determinism over raw computing power, because certification of such processors depends on mastering determinism. The aim of this thesis is to evaluate the Kalray many-core processor (MPPA-256) in an industrial avionic context. We chose the maintenance function HMS (Health Monitoring System), which requires high bandwidth and a bounded response time. In addition, this function has parallelism properties: it processes vibration data from sensors that are functionally independent, so their processing can be parallelized across several cores. The particularity of this study is that it addresses the deployment of an existing sequential function on a many-core architecture, from data acquisition to the computation of the health indicators, with a strong emphasis on the input/output data flow. Our research led to five main contributions:
• Transformation of the existing algorithms into incremental algorithms able to process data as they arrive from the sensors.
• Management of the input flow of vibration samples up to the computation of the health indicators: the availability of raw data in the internal cluster, the moment at which it is consumed, and the estimation of the computational load.
• Minimally intrusive timing measurements taken directly on the MPPA-256 by inserting timestamps into the data flow.
• A software architecture, based on a three-stage pipeline, that respects real-time constraints even in the worst case.
• Illustration of the limits of the existing function: our experiments showed that contextual parameters of the helicopter, such as rotor speed, must be correlated with the health indicators to reduce false alarms.
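The first contribution, turning batch algorithms into incremental ones, can be sketched as follows. This is an illustrative stand-in, not the thesis code: a running-RMS health indicator folds each vibration sample in as it arrives, so no acquisition window needs to be buffered, which is the property that keeps the per-cluster memory footprint small on a many-core target.

```python
import math

class IncrementalRMS:
    """Running RMS health indicator: O(1) memory per vibration channel."""
    def __init__(self):
        self.n = 0
        self.sumsq = 0.0

    def push(self, sample):
        """Fold in one sample as soon as it arrives from the sensor."""
        self.n += 1
        self.sumsq += sample * sample

    def value(self):
        return math.sqrt(self.sumsq / self.n)

# Channels are functionally independent, so on a many-core target each one
# could be pinned to its own core; here we simply loop over them.
channels = [IncrementalRMS() for _ in range(3)]
for t in range(1000):
    for ch in channels:
        ch.push(math.sin(0.01 * t))     # stand-in vibration signal

# The incremental indicator matches the batch computation exactly.
batch = math.sqrt(sum(math.sin(0.01 * t) ** 2 for t in range(1000)) / 1000)
assert all(abs(ch.value() - batch) < 1e-9 for ch in channels)
```

In the thesis' pipeline the same pattern would sit in the middle stage, between acquisition and the publication of health indicators.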
|
418 |
Influência do treinamento de força sobre a estratégia de prova e o desempenho de corredores de longa distância em um teste contrarrelógio de 10 km / Influence of strength training on pacing strategy and performance in long distance runners in a 10-km running time trial
Mayara Vieira Damasceno 16 October 2015 (has links)
The aim of the present study was to analyze the impact of an 8-week strength-training program on performance and on the pacing strategy adopted by runners during a self-paced endurance run. Eighteen endurance runners were allocated to either a strength-training group (STG, n = 9) or a control group (CG, n = 9) and performed the following tests before and after the training period: a) anthropometric measures and a maximal incremental treadmill test, b) a constant-speed running test, c) a 10-km running time trial, d) a drop jump test, e) a 30-s Wingate anaerobic test, f) a maximum dynamic strength test (1RM), and g) a time-to-exhaustion test. Electromyographic activity of the vastus medialis and biceps femoris was measured during the 1RM test. In the STG, the magnitude of improvement for 1RM (23.0 ± 4.2%, P = 0.001), drop jump (12.7 ± 4.6%, P = 0.039), and peak treadmill speed (2.9 ± 0.8%, P = 0.013) was significantly higher than in the CG. The increase in 1RM for the STG was accompanied by a tendency toward higher electromyographic activity (P = 0.080). The magnitude of improvement in 10-km running performance was higher for the STG (2.5%) than for the CG (-0.7%, P = 0.039). Performance improved mainly due to higher speeds during the last seven laps (last 2800 m) of the 10-km trial. Nevertheless, there were no significant differences between the pre- and post-training periods in pacing strategy, maximal oxygen uptake, respiratory compensation point, running economy, or anaerobic performance for either group (P > 0.05). In conclusion, these findings suggest that, although a strength-training program does not alter the pacing strategy, it offers a potent stimulus to counteract fatigue during the last parts of a 10-km run, resulting in improved overall running performance.
|
419 |
Determinação do limiar de anaerobiose pela análise visual gráfica e pelo modelo matemático de regressão linear bi-segmentado de Hinkley em mulheres saudáveis / Anaerobic threshold determined by graphic visual analysis and Hinkley bi-segmental linear regression mathematical model in healthy women
Higa, Mali Naomi 17 November 2006 (has links)
O limiar de anaerobiose (LA) é definido como a intensidade de exercício físico em que a produção de energia pelo metabolismo aeróbio é suplementada pelo metabolismo anaeróbio. Este índice constitui-se de um delimitador fisiológico de grande importância para o fornecimento de informações concernentes aos principais sistemas biológicos do organismo, os quais estão envolvidos na realização de um exercício físico. O LA é um importante parâmetro de determinação da capacidade aeróbia funcional de um indivíduo. Diversos métodos são usados para estimar o LA durante exercício. Existem métodos invasivos, como a medida repetida da concentração de lactato sanguíneo; e métodos não-invasivos, por meio de análise de variáveis biológicas como medidas contínuas dos gases respiratórios, através da análise de mudança do padrão de resposta das variáveis ventilatórias e metabólicas, e também pela análise da mudança do padrão de resposta da freqüência cardíaca (FC) frente a um exercício físico incremental. O objetivo deste estudo foi comparar e correlacionar o LA determinado por métodos não-invasivos de análise visual gráfica das variáveis ventilatórias e metabólicas, considerado como padrão-ouro neste estudo, e pelo modelo matemático de regressão linear bi-segmentado utilizando o algoritmo de Hinkley, aplicado a série de dados de FC (Hinkley FC) e da produção de dióxido de carbono ( CO2) (Hinkley CO2). Metodologia: Treze mulheres jovens (24 ± 2,63 anos) e dezesseis mulheres na pós-menopausa (57 ± 4,79 anos), saudáveis e sedentárias realizaram teste ergoespirométrico continuo do tipo rampa em cicloergômetro (Quinton Corival 400), com incrementos de 10 a 20 Watts/min até a exaustão física. As variáveis ventilatórias e metabólicas foram captadas respiração a respiração (CPX-D, Medical Graphics), e a FC batimento a batimento (ECAFIX, ACTIVE-E). Os dados foram analisados por testes não paramétricos de Friedman, Mann-Whitney e correlação de Spearman. Nível de significância de ? = 5%. 
Resultados: Os valores das variáveis potência (W), FC (bpm), consumo de oxigênio relativo ( O2) (mL/kg/min), O2 absoluto (mL/min), CO2 (mL/min) e ventilação pulmonar ( E) (L/min) no LA não apresentaram diferenças significativas entre as metodologias (p > 0,05) nos dois grupos de mulheres estudadas. A análise de correlação dos valores de potência em W, FC em bpm, O2 em mL/kg/min, O2 em mL/min, CO2 em mL/min e E em L/min, entre o método padrão-ouro com o Hinkley CO2 foram respectivamente: rs=0,75; rs=0,57; rs=0,48; rs=0,66; rs=0,47 e rs=0,46 no grupo jovem, e rs=-0,013; rs=0,77; rs=0,88; rs=0,60; rs=0,76 e rs=0,80 no grupo pós-menopausa. Os valores de correlação do método padrão-ouro com Hinkley FC para as variáveis potência em W, FC em bpm, O2 em mL/kg/min, O2 em mL/min, CO2 em mL/min e E em L/min, obtidas no LA foram respectivamente: rs=0,58; rs=0,42; rs=0,61; rs=0,57; rs=0,33 e rs=0,39 no grupo de jovens, e rs=0,14; rs=0,87; rs=0,76; rs=0,52; rs=0,33 e rs=0,65 no grupo pós-menopausa. O grupo pós-menopausa apresentou melhores valores de correlação em relação ao grupo de jovens, exceto para as variáveis potência e consumo de oxigênio absoluto (mL/min). Este fato pode estar relacionado a uma maior taxa de variação e magnitude das variáveis analisadas em indivíduos jovens em relação aos de meia-idade, sendo, desta forma, obtida melhor adequação do modelo matemático estudado em mulheres de meia idade. Conclusão: O algoritmo matemático de Hinkley proposto para detectar a mudança no padrão de resposta da CO2 e da FC foi eficiente nos indivíduos de meia-idade, portanto, a metodologia matemática utilizada no presente estudo constitui-se de uma ferramenta promissora para detectar o LA em mulheres saudáveis, por ser um método semi-automatizado, não invasivo e objetivo na determinação do LA. / The anaerobic threshold (AT) is defined as the intensity level of physical exercise at which energy production by aerobic metabolism is supplemented by anaerobic metabolism. 
This index provides a physiologic delimitation of great importance to supply the organism biological systems information involved in physical exercise performance. The AT constitutes a most important determining of an individuals functional aerobic capacity. Several methods are used for estimating the AT during exercise. There are invasive methods that require repeated blood lactate accumulation; and there exist non-invasive methods by biological variables analysis, like continuous respiratory gases determination by analysis of changes in pattern respiratory and metabolic responses, and heart rate (HR) responses too. The aim of the present study was to compare AT obtained by a graphic visual method of ventilatory and metabolic variables, considered by gold standard method in the present study, with the bi-segmental linear regression mathematic model of Hinkleys algorithm applied in a HR (Hinkley HR) and carbon dioxide output ( CO2) (Hinkley CO2) data. Methodology: Thirteen young women, 24 ± 2,63 years old, and sixteen postmenopausal women, 57 ± 4,79 years old, leading healthy and sedentary life style were submitted to an incremental test in a cicloergometer electromagnetic braking (Quinton Corival 400), with 10 to 20 W/min increments up to physical exhaustion. The ventilatory variables were registered breath-to-breath (CPX-D, Medical Graphics) and HR was obtained beat-to-beat (ECAFIX, ACTIVE-E), over real time. The data were analyzed by Friedmans test and Spearmans correlation test, with a level of significance set at 5%. Results: The Power output (W), HR (bpm), oxygen uptake ( O2) (mL/kg/min), O2 (mL/min), CO2 (mL/min) and pulmonary ventilation ( E) (L/min) data in AT have showed no significant differences (p > 0,05) between methods to determine AT in both women groups. 
Correlations between the gold-standard method and Hinkley VCO2 for power output in W, HR in bpm, VO2 in mL/kg/min, VO2 in mL/min, VCO2 in mL/min and VE in L/min were, respectively: rs = 0.75, 0.57, 0.48, 0.66, 0.47 and 0.46 in the young group, and rs = -0.013, 0.77, 0.88, 0.60, 0.76 and 0.80 in the postmenopausal group. Correlations between the gold-standard method and Hinkley HR at the AT for the same variables were, respectively: rs = 0.58, 0.42, 0.61, 0.57, 0.33 and 0.39 in the young group, and rs = 0.14, 0.87, 0.76, 0.52, 0.33 and 0.65 in the postmenopausal group. The postmenopausal group showed better correlation values than the young group, except for power output and VO2 (mL/min). This may be related to the higher rate of change and faster response kinetics of the studied variables in the young group; correspondingly, the mathematical model fitted the middle-aged women better. Conclusion: Hinkley's mathematical algorithm, proposed to detect changes in the response patterns of the VCO2 and HR variables, was efficient in detecting the AT in the group of healthy postmenopausal women. The mathematical methodology used in the present study therefore proved a promising tool, as it provides a semi-automated, non-invasive and objective determination of the AT.
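The bi-segmental linear regression idea behind the Hinkley-style analysis can be sketched as follows: fit two straight lines to the series, one on each side of a candidate breakpoint, and pick the breakpoint that minimizes the total residual sum of squares. This is an illustrative reimplementation in Python on synthetic data, not the authors' code; the data values are invented for the example.

```python
def fit_line(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept, rss)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    intercept = my - slope * mx
    rss = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    return slope, intercept, rss

def bisegmental_breakpoint(xs, ys, min_pts=3):
    """Index k splitting the series into two segments whose separate
    line fits have the smallest combined residual sum of squares."""
    best_k, best_rss = None, float("inf")
    for k in range(min_pts, len(xs) - min_pts):
        _, _, rss1 = fit_line(xs[:k], ys[:k])
        _, _, rss2 = fit_line(xs[k:], ys[k:])
        if rss1 + rss2 < best_rss:
            best_rss, best_k = rss1 + rss2, k
    return best_k

# Synthetic VCO2-like response: the slope changes at x = 10,
# mimicking the change in response pattern at the threshold.
xs = list(range(20))
ys = [1.0 * x if x < 10 else 10.0 + 3.0 * (x - 10) for x in xs]
print(xs[bisegmental_breakpoint(xs, ys)])  # → 10, the simulated breakpoint
```

In practice the method would be run over the breath-to-breath VCO2 or beat-to-beat HR series from the incremental test, with the detected breakpoint reported as the AT.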
|
420 |
Evolving Legacy System's Features into Fine-grained Components Using Regression Test-Cases — Mehta, Alok 11 December 2002 (has links)
Because many software systems used in business today are considered legacy systems, the need for software evolution techniques has never been greater. We propose a novel evolution methodology for legacy systems that integrates the concepts of features, regression testing, and Component-Based Software Engineering (CBSE). Regression test suites are untapped resources that contain important information about the features of a software system. By exercising each feature with its associated test cases under code profilers and similar tools, the implementing code can be located and refactored into components. The unique combination of Feature Engineering and CBSE makes it possible for a legacy system to be modernized quickly and affordably. We develop a new framework to evolve legacy software that maps features to software components refactored from their feature implementations. In this dissertation, we make the following contributions: First, a new methodology to evolve legacy code is developed that improves the maintainability of evolved legacy systems. Second, the methodology establishes a clear distinction between features and functionality, and captures the relationships among features, using our feature model. Third, the methodology provides guidelines for constructing feature-based reusable components using our fine-grained component model. Fourth, we bridge the complexity gap by identifying feature-based test cases and developing feature-based reusable components. We show how to reuse existing tools to aid the evolution of legacy systems rather than writing special-purpose tools for program slicing and requirement management. We have validated our approach on the evolution of a real-world legacy system: by applying this methodology, American Financial Systems, Inc. (AFS) has successfully restructured its enterprise legacy system and reduced the cost of future maintenance.
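The feature-location step described above — exercising each feature with its regression tests under a profiler and attributing the covered code to that feature — can be sketched as follows. This is a minimal illustrative sketch, not the dissertation's tooling: the feature functions (`report_feature`, `export_feature`, `shared_helper`) are hypothetical stand-ins for feature entry points, and a simple `sys.settrace` tracer stands in for a real code profiler.

```python
import sys
from collections import defaultdict

# Hypothetical legacy code: two features sharing a helper.
def shared_helper():
    return 1

def report_feature():
    return shared_helper() + 10

def export_feature():
    return shared_helper() + 20

def trace_functions(fn):
    """Run fn() and return the set of function names it executed."""
    seen = set()
    def tracer(frame, event, arg):
        if event == "call":
            seen.add(frame.f_code.co_name)
        return tracer
    sys.settrace(tracer)
    try:
        fn()
    finally:
        sys.settrace(None)
    return seen

# Each feature's regression test is traced separately.
coverage = {
    "report": trace_functions(report_feature),
    "export": trace_functions(export_feature),
}

# Functions exercised by exactly one feature are candidates for
# extraction into a fine-grained, feature-specific component;
# shared_helper is exercised by both, so it is excluded.
owners = defaultdict(set)
for feature, funcs in coverage.items():
    for f in funcs:
        owners[f].add(feature)
unique = {f for f, owns in owners.items() if len(owns) == 1}
print(sorted(unique))  # → ['export_feature', 'report_feature']
```

The same intersect-and-diff reasoning over per-test coverage data is what lets an off-the-shelf profiler substitute for special-purpose program-slicing tools.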
|