|September 07, 2007|
|Nutrition Research Alert|
|Emerging Technologies :: R&D Trends :: Technology Innovation :: Strategic Analysis|
MEASURING RESTING METABOLIC RATE IN PRESCHOOL CHILDREN
Resting metabolic rate (RMR) is an important component of total energy expenditure, contributing 65% to 75% of total daily energy demands in adults. Measuring RMR accurately and reliably is important when assessing energy balance and disease states, and predicted RMR is often used to estimate total daily energy requirements.
With the recent rise in childhood obesity, it is important that accurate techniques are available to measure RMR in children to investigate patterns in energy balance associated with overweight and obesity. The ability to measure RMR reliably in young children may be compromised because of the requirement that subjects sit quietly for a protracted period.
The aims of a recent study were to investigate the repeatability of measuring RMR in a sample of preschool children between 2 and 6 years of age and to explore the impact of different measurement protocols on the derived estimate of RMR. Eleven children (4 females, 7 males), 2 to 6 years of age, served as subjects. Subjects reported for 3 separate clinic visits over a two-week period, at which time RMR and body composition were measured. Four different methods of calculating RMR were utilized: 1) RMR average after 5 minutes = the mean of all measurements after discarding the first 5 minutes; 2) RMR average after 10 minutes = the mean of all measurements after discarding the first 10 minutes; 3) RMR still after 5 minutes = the mean of measurements excluding the first 5 minutes and excluding measurements that included large body movements; and 4) RMR still after 10 minutes = the mean of measurements excluding the first 10 minutes and excluding large body movements.
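As a rough illustration, the four averaging protocols can be sketched in Python. The trace, function, and variable names below are hypothetical and invented for illustration; this is not the study's actual processing code.

```python
# Hypothetical sketch of the four RMR averaging protocols; the synthetic
# trace stands in for a minute-by-minute indirect-calorimetry record.

def rmr_estimate(readings, discard_min, still_only=False):
    """Mean RMR after discarding an initial settling period, optionally
    also excluding minutes that contained large body movements."""
    kept = [kcal for minute, kcal, still in readings
            if minute >= discard_min and (still or not still_only)]
    return sum(kept) / len(kept)

# Synthetic 30-minute trace: (minute, kcal/day, child was still?).
# Early minutes and "movement" minutes read high, as in the study.
trace = [(m,
          1000 + (50 if m < 10 else 0) + (80 if m % 7 == 0 else 0),
          m % 7 != 0)
         for m in range(30)]

estimates = {
    "avg_after_5":    rmr_estimate(trace, 5),
    "avg_after_10":   rmr_estimate(trace, 10),
    "still_after_5":  rmr_estimate(trace, 5, still_only=True),
    "still_after_10": rmr_estimate(trace, 10, still_only=True),
}
# In this synthetic trace, the "still after 10 minutes" protocol
# yields the lowest estimate, consistent with the study's finding.
```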
Repeatability of the RMR measurements was good (coefficient of variation of replicates, 6.8%), with no significant difference between days of measurement. The lowest RMR estimate was obtained when both the first 10 minutes and periods of large body movement were excluded. This estimate was, on average, 4% lower than the mean of all measurements after the first 5 minutes with body movements included.
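The repeatability statistic quoted above is a within-subject coefficient of variation across replicate visits, which can be computed as follows; the visit values here are invented for illustration, not taken from the study.

```python
# Coefficient of variation (CV) of replicate measurements:
# CV (%) = standard deviation of the replicates / their mean * 100.
from statistics import mean, stdev

def replicate_cv(replicates):
    """Within-subject CV (%) for a list of repeated measurements."""
    return stdev(replicates) / mean(replicates) * 100

# Three clinic-visit RMR values for one child (kcal/day, made up):
visits = [980, 1040, 1010]
cv = replicate_cv(visits)  # ~3.0%
```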
These findings suggest that RMR can be measured in preschool children and that the most effective method for calculating RMR in these subjects is to exclude periods when large body movements occur and the first 10 minutes of the measurement period.
D. Jackson, L. Pace, J. Speakman. The measurement of resting metabolic rate in preschool children. Obesity;15:1930-1932 (August, 2007). [Correspondence: Diane M. Jackson, Division of Vascular Health, Rowett Research Institute, Greenburn Road, Bucksburn, Aberdeen AB21 9SB, Scotland. E-mail: email@example.com].
The prevalence of childhood overweight has increased dramatically in the past few decades. A major reason for the current epidemic is that youths are constantly exposed to obesigenic environments in which energy-dense foods and sedentary activities are readily available while the opportunities for more healthy nutritional practices and moderate to vigorous physical activity (MVPA) are limited. Efforts to prevent overweight must change this balance by exposing youths to "fitogenic" environments, which discourage unhealthy eating and screen time while encouraging MVPA. It is important to note that high levels of adiposity are associated with low levels of cardiovascular (CV) fitness. Therefore, to some extent, the unfavorable health status of obese youths may be due to the poor CV fitness that accompanies high fatness, rather than fatness alone.
While childhood overweight affects all groups, black girls are particularly at risk of becoming overweight.
Investigators performed a study targeted at preventing further accretion of undesirable levels of adipose tissue in black girls through regular physical activity (PA). They delivered a physical training dose of 80 minutes of MVPA, with 35 minutes at an intensity in the vigorous range, as monitored with heart rate (HR) monitors.
Subjects were black girls 8 to 12 years of age. Subjects came to the medical center for testing at the beginning of the study and after 10 months of intervention. Pubertal stages were assessed by pediatricians. Height and weight were measured and body mass index (BMI) was calculated. Waist circumference and skinfold thicknesses were also measured. Total body composition was obtained using dual energy X-ray absorptiometry (DXA). Visceral adipose tissue (VAT) and subcutaneous abdominal tissue (SAAT) were obtained using magnetic resonance imaging. CV fitness was assessed using a multistage treadmill test. Free-living physical activity (PA) was measured using a 7-day recall.
Subjects were randomized within each school after pretesting to the intervention or control group with a ratio of 3:2. Subjects in the control group received no intervention. Subjects in the intervention group stayed at their school at the end of the day to receive a 110 minute intervention. The intervention was offered during the school year every day that school was in session. The intervention consisted of 30 minutes of homework time during which the subjects were provided with a healthy snack, and 80 minutes of PA.
Mean attendance was 54%. Compared with the control group, the intervention group had a relative decrease in percent body fat (%BF), BMI, and VAT and a relative increase in bone mineral density (BMD) and CV fitness. Higher attendance was associated with greater increases in BMD and greater decreases in %BF and BMI. Higher HR during PA was associated with greater increases in BMD and greater decreases in %BF.
An after-school PA program can lead to beneficial changes in body composition and CV fitness in young black girls.
P. Barbeau, M. Johnson, C. Howe, et al. Ten months of exercise improves general and visceral adiposity, bone, and fitness in black girls. Obesity;15:2077-2085 (August, 2007). [Correspondence: Paule Barbeau, Medical College of Georgia, Georgia Prevention Institute, 1499 Walton Way, HS 1755, Augusta, GA 30912. E-mail: firstname.lastname@example.org].
Prejudice toward individuals who are overweight is an attitude commonly held by Americans. A small amount of evidence suggests that anti-fat attitudes held by parents are related to behaviors toward their children. Given that feeding occurs within a social context in which parents' expectations and perceptions are conveyed to their children, the focus of a recent investigation was the relationship between parents' attitudes about weight and their restrictive feeding practices.
Restrictive feeding practices have received a great deal of attention in the literature. Motivated by a desire to enhance their child's health or to prevent him or her from becoming overweight, parents might restrict their child's food intake in a variety of ways (for example, not allowing the child to eat certain foods, or allowing only small portions of some foods). Research finds that mothers' restrictive feeding practices increase with concern about their child's weight.
The current study was designed to test the hypothesis that parents who hold stronger anti-fat attitudes would report higher restriction of their child's food intake for weight reasons, even after controlling for the parent's BMI (in kg/m2), the child's BMI, and parental concern about the child's weight. It was expected that restriction for health would not be related to anti-fat attitudes.
The current research includes data from the mothers of 126 four- to six-year-old children (38% boys), as well as the fathers of 102 of these children. Parents' concern about child overweight was assessed with 5 items. This subscale included three items from the Child Feeding Questionnaire and two additional items related to parents' concerns about both the current and future weight of their child. Two restrictive feeding practices subscales were taken from the Comprehensive Feeding Practices Questionnaire. One (Restriction for Weight; RW) assesses parents' attitudes and behaviors related to restricting their child's food intake to control or maintain their child's weight. The other subscale (Restriction for Health; RH) assesses attitudes and behaviors related to restricting or limiting unhealthy foods in their child's diet. Parents' attitudes toward obesity were examined using Crandall's Anti-fat Questionnaire. This instrument includes three subscales: 1) the evaluation and dislike of individuals who are fat (Dislike); 2) the controllability of weight/fat (Willpower); and 3) personal concerns and distress about weight or the prospect of becoming overweight (Fear of Fat).
Parental concern about child overweight was related to higher restrictive feeding practices for both mothers and fathers. Parents' anti-fat attitudes also predicted restrictive feeding above and beyond the effects of parent and child BMI and parental concern about overweight.
These findings suggest that parents' anti-fat attitudes affect the way they feed their children.
D. Musher-Eizenman, S. Holub, J. Hauser, et al. The relationship between parents' anti-fat attitudes and restrictive feeding. Obesity;15:2095-2102 (August, 2007). [Correspondence: Dara R. Musher-Eizenman, Department of Psychology, Bowling Green State University, Bowling Green, OH 43403. E-mail: email@example.com].
Excess weight among children and adolescents in the United States is a prevalent and serious problem. Obesity appears to be disproportionately prevalent among minorities, particularly Mexican American and non-Hispanic black children. The increasing prevalence of overweight children has been linked to parallel increases in the medical comorbidities associated with excess weight, such as type 2 diabetes, hypertension, and the metabolic syndrome.
Previous studies suggest that overweight or obese children have poorer academic outcomes. Previous studies have also demonstrated a negative association between number of absences and academic performance. The direct link between obesity in children and school attendance, however, has not been examined.
The purpose of a recent investigation was to examine the association between relative weight and absenteeism in fourth to sixth graders from nine K through Grade 8 public schools in an urban area. Participation was limited to fourth to sixth grade students in nine of ten inner-city Philadelphia schools that were part of an ongoing randomized controlled trial to assess prevention strategies for obesity in low socioeconomic samples. The sample included 1069 participants. Each participant was classified into one of four weight categories: underweight, BMI-for-age < 5th percentile; normal-weight, BMI-for-age 5th to 84.9th percentile; overweight, BMI-for-age 85th to 94.9th percentile; and obese, BMI-for-age ≥ 95th percentile. Nearly 40% of children were overweight or obese.
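The four-category classification can be expressed as a simple threshold function. Note that the BMI-for-age percentile itself comes from CDC growth charts; this hypothetical sketch only maps an already-computed percentile onto the categories.

```python
def weight_category(bmi_for_age_percentile):
    """Map a CDC BMI-for-age percentile to the study's four categories."""
    if bmi_for_age_percentile < 5:
        return "underweight"
    elif bmi_for_age_percentile < 85:
        return "normal-weight"
    elif bmi_for_age_percentile < 95:
        return "overweight"
    else:
        return "obese"

# e.g. a child at the 90th percentile falls in the overweight band.
```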
Weight was measured in the second semester of the academic year. Absentee data for both semesters of the same 180 day academic year were recorded by home room teachers first thing each morning.
Analysis of variance showed that overweight children were absent significantly more than normal-weight children. Linear regression showed that the obese category remained a significant contributor to the number of days absent even after adjusting for age, race/ethnicity, and gender.
The data suggest that overweight children have a greater risk for school absenteeism, which in addition to the medical and psychosocial consequences, could negatively impact them later in life.
A. Geier, G. Foster, L. Womble, et al. The relationship between relative weight and school attendance among elementary schoolchildren. Obesity;15:2157-2161 (August, 2007). [Correspondence: Gary D. Foster, Temple University, Center for Obesity Research and Education, 3223 N. Broad St., Suite 175, Philadelphia, PA 19140. E-mail: firstname.lastname@example.org].
Obesity has become a major health problem of global significance. Several chronic illnesses are associated with obesity, such as hypertension, type 2 diabetes mellitus, cardiovascular diseases, respiratory problems, and osteoarthritis (OA). Human obesity is associated with an increased risk for knee OA. Moreover, obesity increases the risk of OA progression, and weight loss is associated with a reduction in the risk of developing symptomatic knee OA. The joint damage and chronic pain from knee OA lead to muscle atrophy, decreased mobility, poor balance, and eventually limited physical activity. In addition, obesity is a risk factor for activity limitation in patients with OA.
Cardiorespiratory capacity is recognized as an important component of health-related fitness. Adherence to a physical exercise regimen, when prescribed, is crucial in preserving physical performance and function for patients with knee OA. Generally, physical fitness and aerobic exercise capacity are low in obese individuals.
A study was performed to evaluate cardiopulmonary exercise capacity in obese subjects with and without knee OA and to clarify the relationship between this OA-related disability and quality of life. Twenty-eight obese patients with knee OA were recruited as subjects. Inclusion criteria for participation in the study were as follows: 1) BMI ≥ 28 kg/m2; 2) knee pain on most days of the month; 3) fulfillment of the American College of Rheumatology criteria for symptomatic knee OA; 4) a sedentary activity pattern, with < 20 minutes of formal exercise once weekly for the past 6 months; and 5) sufficient cognition and communication to understand the nature and potential risks of the study. For the control group, 28 obese volunteers without knee OA, with a BMI ≥ 28 kg/m2, were recruited.
Height and weight were measured. Skinfold thickness was measured at four sites: biceps, triceps, subscapular, and suprailiac. The Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) was used to assess physical function. Exercise capacity was assessed with a graded exercise test using an electronically braked arm crank ergometer. A computerized gas analysis system collected and analyzed expired gases during exercise. The subject was attached to a 12-lead electrocardiograph to monitor heart rate (HR), and the electrocardiogram was displayed throughout the cardiopulmonary exercise test (CPET). The 6 minute walk test (6 MWT) was used to study physical performance. The Medical Outcomes Study Short Form 36 (SF-36) questionnaire was used to measure quality of life.
VO2peak was significantly higher in the controls than in the patients (mean ± standard deviation, 1.584 ± 0.23 L/min vs. 0.986 ± 0.20 L/min; p < 0.001). Obese subjects without knee OA walked a significantly longer distance in the 6 MWT than obese patients with knee OA (p < 0.001). Researchers also observed significant negative correlations between VO2peak and perceived exertion, the WOMAC pain and physical limitation scores, and the bodily pain and general health domains of the SF-36.
Knee OA reduces exercise and ambulatory capacity and impairs quality of life in obese individuals. This study suggests that exercise capacity and quality of life might be improved by energetic and intensive treatment of pain resulting from knee OA.
S. Sutbeyaz, N. Sezer, B. Koseoglu, et al. Influence of knee osteoarthritis on exercise capacity and quality of life in obese adults. Obesity;15:2071-2076 (August, 2007). [Correspondence: Serap Tomruk Sutbeyaz, Karakusunlar M 339.s Erkam apt, No:12/9 100.yil/Cankaya, Ankara, Turkey. E-mail: email@example.com].
The rise in obesity rates over the last three decades has been paralleled by an increase in the frequency of eating out and in food portion sizes. The number of meals consumed away from home in the United States rose from 3.7 meals per week in 1981 to 5 meals per week in 2000. Frequency of eating out has been associated with higher energy and fat intakes and with a higher BMI. Along with the frequency of eating out, there has been a marked trend toward the availability and consumption of larger portion sizes.
Chefs play an important role in preparing and serving healthful food. While surveys have reported that chefs recognize the importance of nutrition in menu planning, many are preparing meals that are inconsistent with the US dietary guidelines.
The purposes of a recent study were to examine opinions of chefs regarding the determination of restaurant portion sizes and to identify factors that influence the portion size of restaurant foods. Investigators assessed a range of factors associated with the amount of food served in a restaurant, including plate waste, food presentation, food cost, nutritional content, competition with other restaurants, customer expectations, and characteristics of the respondents. Additional analyses examined opinions regarding portion size, nutrition information, food intake, and weight management.
The survey was developed at the Pennsylvania State University and Clemson University to examine the opinions of chefs on the portion sizes served in restaurants. The final sample consisted of 300 survey respondents--a sample of chefs attending the Research Chefs Association Annual Conference and Tradeshow and various American Culinary Federation regional meetings in the spring of 2005.
Executive chefs were identified as being primarily responsible for establishing portion sizes served in restaurants. Factors reported to have a strong influence on restaurant portion sizes included presentation of foods, food cost, and customer expectations. While 76% of chefs thought that they served "regular" portions, the actual portions of steak and pasta they reported serving were 2 to 4 times larger than serving sizes recommended by the US government. Chefs indicated that they believe that the amount of food served influences how much patrons consume and that large portions are a problem for weight control, but their opinions were mixed regarding whether it is the customer's responsibility to eat an appropriate amount when served a large portion of food.
The results from this study suggest that cultural norms and economic value strongly influence the determination of restaurant portion sizes.
M. Condrasky, J. Ledikwe, J. Flood, et al. Chefs' opinions of restaurant portion sizes. Obesity;15:2086-2094 (August, 2007). [Correspondence: Marge Condrasky, Department of Food Science and Human Nutrition, A216 Poole Agricultural Center, Clemson University, Clemson, SC 29634-0371. E-mail: firstname.lastname@example.org].
A recent review study suggested that a history of childhood sexual abuse (CSA), which may affect up to one third of women and one eighth of men, may serve as a potential risk factor for obesity. It has been postulated that CSA may result in individuals purposefully altering their lifestyles to increase weight in an attempt to protect themselves from further abuse, in the belief that by being overweight or obese they will be less sexually attractive to any potential abuser. It has also been suggested that "comfort-eating" may be an "adaptive function" after CSA and/or that the hormonal responses to increased levels of stress and psychopathology resulting from CSA may result directly in obesity. In support of all of these possible mechanisms, CSA has been found to be associated with disordered eating, which, in turn, predisposes to obesity.
Researchers performed a community-based prospective birth cohort study involving repeated assessments of children before the disclosure of CSA, making it possible to take into account prospectively measured potential confounders of any association of CSA and BMI. Further, the study includes both men and women and the large sample size permits assessment of gender differences in the association between CSA and BMI.
The aims of the investigation were to identify the extent to which CSA is associated with BMI in young adulthood, to examine whether any associations differed between men and women, and to examine whether any associations were explained by potential confounding factors.
The Mater-University Study of Pregnancy (MUSP) is a longitudinal study of women, and their offspring, who received antenatal care at a major public hospital. The mothers and children have been followed up prospectively, with maternal questionnaires administered when the children were 6 months, 5, 14, and 21 years of age. In addition, at 5, 14, and 21 years of age, detailed physical, cognitive, and developmental examinations of the children were undertaken; and at 14 and 21 years of age, the children completed health, welfare, and lifestyle questionnaires. Of the original 7223 participants, 2461 had both self-reported CSA data and measured weight and height at 21 years of age.
At the 21-year follow-up, young adults were divided into three categories: those who did not experience any type of sexual abuse or were unsure whether they had been abused; those who reported sexual abuse without intercourse or oral sex, categorized as "non-penetrative" abuse; and those who reported a history of oral sex or intercourse before the age of 16 years, classified as "penetrative" abuse. Researchers used these three categories of no abuse, non-penetrative abuse, and penetrative abuse as the main exposure variable. The main outcome was the young adults' BMI at the 21-year follow-up. Multiple confounding factors were considered.
Of the 1273 men, 10.5% reported non-penetrative and 7.5% reported penetrative CSA before age 16 years. Of 1305 women, 20.6% reported non-penetrative and 7.9% reported penetrative CSA by age 16 years. Researchers found that young women's BMI and the prevalence of overweight at age 21 were greater in those who experienced penetrative CSA. This association was robust to adjustment for a variety of potential confounders. However, there was no association between non-penetrative CSA and BMI in women and no association between either category of CSA and BMI in men. There was statistical evidence for a gender difference in the association of CSA with mean BMI at age 21 (p value for statistical interaction < 0.01 in all models).
These findings suggest that among women, penetrative CSA is associated with greater BMI and increased odds of being overweight later in life.
A. Mamun, D. Lawlor, M. O'Callaghan, et al. Does childhood sexual abuse predict young adults' BMI? A birth cohort study. Obesity;15:2103-2110 (August, 2007). [Correspondence: Abdullah Al Mamun, Longitudinal Studies Unit, School of Population Health, University of Queensland, Herston Rd, Herston, QLD 4006, Australia. E-mail: email@example.com].
Many epidemiological studies have suggested a protective effect of obesity on postmenopausal bone loss. Obesity in women is associated with a lower risk of osteoporosis. This phenomenon may reflect a beneficial effect of hyperestrogenemia, because obese patients are characterized by higher estrogen levels.
There are a multitude of factors, including hormones, cytokines, and growth and inflammatory factors, responsible for the regulation of bone metabolism. Since 1997, a new local player in bone metabolism that influences osteoclast formation and activity has been recognized: osteoprotegerin (OPG). It binds the receptor activator of NF-κB ligand (RANKL, the osteoclast differentiation factor), thus inhibiting osteoclast differentiation and function.
Researchers know from a preliminary investigation that serum OPG concentration is lower in the obese than in normal-weight subjects. Studying the serum level of OPG after weight reduction in obese perimenopausal women is of interest.
Researchers set out to assess the influence of weight reduction therapy on serum OPG concentration in obese patients. Additionally, the study aimed to answer the question of whether OPG could be a protective factor for bone loss in obese perimenopausal women. Finally, they aimed to evaluate the relationship between serum OPG level and both markers of bone turnover and calciotropic hormones after weight loss.
Forty-three obese women were studied. The control group consisted of 19 normal-weight women. In all patients, serum concentrations of OPG, C telopeptide of type I collagen containing the crosslinking site (CTX), osteocalcin, parathormone, 25-(OH)-D3 (vitamin D), and total calcium and phosphorus were assessed before and after a 3 month weight reduction therapy.
In obese subjects, serum concentrations of OPG, vitamin D, osteocalcin, total calcium, and phosphorus were significantly lower, and the serum concentration of parathormone was significantly higher, before weight reduction therapy in comparison with normal-weight controls. After weight reduction, significantly higher serum concentrations of vitamin D and CTX and a significantly lower concentration of OPG were found.
Because serum concentration of OPG was lower in obese patients and decreased further with weight reduction therapy, OPG cannot be treated as a protective factor from bone loss in obese patients.
M. Holecki, B. Zahorska-Markiewicz, J. Janowska, et al. The influence of weight loss on serum osteoprotegerin concentration in obese perimenopausal women. Obesity;15:1925-1929 (August, 2007). [Correspondence: Michal Holecki, Department of Pathophysiology, Medical University of Silesia in Katowice, ul. Medykow 18, 40-752 Katowice, Poland. E-mail: firstname.lastname@example.org].
ONE HUNDRED POUND WEIGHT LOSSES WITH AN INTENSIVE BEHAVIORAL PROGRAM: CHANGES IN RISK FACTORS IN 118 PATIENTS WITH LONG-TERM FOLLOW-UP
Approximately 5% of US adults have severe obesity, and the prevalence is increasing twice as fast as that of less severe obesity. Severe obesity--defined by a BMI in kg/m2 of > 40--is also termed morbid, extreme, or class 3 obesity. This extreme degree of obesity is accompanied by a significantly higher prevalence of common comorbid conditions and rates of premature mortality twice those seen with less severe degrees of overweight. Furthermore, self-esteem and physical function are assessed to be significantly lower in severely obese than in less obese persons. The present study was an observational study with the objectives of determining the benefits and risks of a weight loss of 100 pounds achieved by following an intensive behavioral program and of assessing long-term maintenance of weight loss. Over a 9 year period, the researchers prospectively identified patients who lost > 100 pounds (45.5 kg) in their program; in this paper they report the outcomes of weight loss, risk factor changes, medication use, side effects, and long-term maintenance of weight loss.
The researchers actively recorded follow-up weights, and charts were systematically reviewed to assess outcome measures and side effects. The intervention included meal replacements (shakes and entrées), low-energy diets, weekly classes, and training in record keeping and physical activity. Assessments included weekly weights, laboratory studies, medication use, lifestyle behaviors, side effects, and follow-up weights. Sixty-three men and 55 women lost > 100 pounds. At baseline, the subjects' average weight was 160 kg, 97% had ≥ 1 obesity-related comorbidity, and 74% were taking medications for comorbidities. Weight losses averaged 61 kg in 44 weeks. Medications were discontinued in 66% of patients, with a cost savings of $100/mo. Despite medication discontinuation, significant decreases in low density lipoprotein (LDL) cholesterol (20%), triacylglycerol (36%), glucose (17%), and systolic (13%) and diastolic (15%) blood pressure values were seen. Side effects were mild, and only 2 patients had severe or serious adverse events. At an average of 5 years of follow-up, patients were maintaining an average weight loss of 30 kg.
Severe obesity is accompanied by many cardiovascular risk factors, including dyslipidemia (in 75% of the patients), hypertension (in 73%), type 2 diabetes mellitus (in 18%), impaired fasting glucose (in 28%), degenerative joint disease (in 40%), and sleep apnea (in 22%). Impressive improvements in all risk factors were seen. Despite discontinuation of all medications in 66% of patients, substantial improvements in lipoprotein, glucose, and blood pressure values occurred. Although it is difficult to estimate, maintenance of a high percentage of these risk factor reductions probably would reduce coronary heart disease risk by > 50%.
Long-term maintenance of weight loss remains a major challenge. Greater initial weight losses are associated with greater maintenance of weight loss than are smaller initial weight losses. Procedures that enhance maintenance of weight loss are regular physical activity, low fat intake, generous consumption of vegetables and fruit, regular use of meal replacements, self-monitoring, and ongoing treatment or coaching. Most of the patients reported here participated in active treatment and weekly behavioral education sessions for > 18 mo during weight loss and initial maintenance activities. Most patients also returned for retreatment during the 5 year follow-up. Moreover, many continued to use shakes and entrées for indefinite periods of time. These lifestyle behaviors undoubtedly contributed to their success in weight maintenance.
In conclusion, an intensive, medically supervised, behavioral weight-management intervention using meal replacements effectively enabled certain severely obese persons to lose > 45.5 kg and to maintain approximately one-half of that weight loss for 5 years. Follow-up weights were available for 81% of the patients at > 2 years. At 2, 3, 4, and 5 years, patients were maintaining weight losses of 38, 36, 27, and 30 kg, respectively.
J. Anderson, S. Conley, A. Nicholas. One hundred pound weight losses with an intensive behavioral program: changes in risk factors in 118 patients with long-term follow-up. AJCN;86:301-307 (August 2007). [Correspondence: JW Anderson, Room 524, Medical Science Building, University of Kentucky, Lexington, KY 40502-0298. E-mail: email@example.com].
METHYLPHENIDATE REDUCES ENERGY INTAKE AND DIETARY FAT INTAKE IN ADULTS: A MECHANISM OF REDUCED REINFORCING VALUE OF FOOD?
Food is very reinforcing, and there are individual differences in the reinforcing value of food, whereby obese persons find food more reinforcing than do nonobese persons and show a stronger preference for highly palatable foods with a high-fat, high-carbohydrate content. Given that food reinforcement predicts food intake, individual differences in the reinforcing value of food may play a role in the development of the positive energy balance leading to obesity. Dopamine has been implicated in mediating the reinforcing value of food, which is a strong determinant of excessive food intake and obesity. Animal and human data suggest that low availability of synaptic dopamine, caused by rapid dopamine transport or reduced brain dopamine signaling, may be related to the development of obesity. Ingesting palatable foods, especially those high in sugar and simple carbohydrates, stimulates the release of dopamine in the accumbens shell and results in the repeated self-administration behavior typically observed with drugs of abuse. Thus, raising brain dopamine concentrations should reduce the reinforcing value of food and the motivation to eat, making it easier to achieve the reduction in energy intake essential for weight loss. Moreover, given that polymorphisms associated with reduced brain dopamine are associated with increased craving and overconsumption of high-fat, high-carbohydrate foods, increasing synaptic dopamine may alter macronutrient preference in addition to reducing caloric intake.
There are several theoretical and clinical reasons to consider methylphenidate (MPH) as an agent to increase brain dopamine. MPH is a dopamine transporter (reuptake) inhibitor currently indicated for the treatment of childhood and adult attention-deficit hyperactivity disorder. Moreover, a common side effect is reduced hunger and weight loss in children and adults, especially in those with the highest BMI, in kg/m2, at baseline.
The purposes of this study were 1) to examine the effects of short-acting MPH on hunger, caloric intake, satiety, and macronutrient consumption during a buffet lunch and 2) to evaluate the effects of MPH on food reinforcement. The researchers also evaluated the relation between changes in the reinforcing value of high-fat, high-carbohydrate foods (dessert or snack food) and changes in energy intake of these high-fat, high-carbohydrate foods to elucidate whether food reinforcement may be a mechanism by which MPH reduces energy intake and macronutrient preference.
Fourteen adults were given placebo or short-acting MPH (0.5 mg/kg) in a randomized, double-blind, placebo-controlled crossover fashion. One hour after ingestion, hunger and the relative reinforcing value of snack food were measured, followed immediately by energy intake and macronutrient preference during a buffet-style lunch. MPH reduced energy intake by 11% and fat intake by 17% relative to placebo. Despite similar levels of prebuffet hunger, subjects reduced their energy and fat intakes more when taking MPH than when taking placebo, which suggests that hunger may not mediate the effects of MPH on energy intake. MPH showed a trend toward reducing the reinforcing value of high-fat food relative to placebo, but reduced food reinforcement was not significantly correlated with energy intake.
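As a rough illustration of the reported numbers, the weight-based dose and the percent reductions can be worked through with assumed values; the body weight and baseline intakes below are hypothetical, not data from the study.

```python
# Hypothetical worked example of the study's dosing and reduction arithmetic.
# The 70-kg body weight, 800-kcal buffet intake, and 30-g fat intake are
# assumed illustrative values, not figures reported in the paper.

def mph_dose_mg(body_weight_kg, dose_mg_per_kg=0.5):
    """Short-acting MPH dose at the study's 0.5 mg/kg level."""
    return body_weight_kg * dose_mg_per_kg

def reduced_intake(baseline, percent_reduction):
    """Intake after the reported percent reduction relative to placebo."""
    return baseline * (1 - percent_reduction / 100)

dose = mph_dose_mg(70)             # 35.0 mg for a hypothetical 70-kg adult
energy = reduced_intake(800, 11)   # 11% lower energy intake: 800 -> 712 kcal
fat = reduced_intake(30, 17)       # 17% lower fat intake: 30 -> 24.9 g
```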
Based on a randomized, double-blind, placebo-controlled crossover design, which yields the highest quality of evidence in human laboratory studies, these data show that MPH, a drug that increases brain dopamine by blocking the dopamine transporter, significantly reduced overall food intake from a mixed-meal, buffet-style lunch relative to placebo. Interestingly, MPH selectively reduced consumption of high-fat foods but had no significant effects on carbohydrate or protein intake. Moreover, no differences in side effects or vital signs between MPH and placebo were reported, suggesting that MPH is well tolerated during acute administration, which is often the period when people experience the most severe side effects from psychotropic medications.
The data show that MPH selectively reduced intake of dietary fat by 17% relative to placebo but had no significant effects on carbohydrate or protein intake during the buffet lunch. Interestingly, MPH also produced a corresponding 29% reduction in the reinforcing value of high-fat snack food, which, although only a nonsignificant trend, may be clinically meaningful given that food reinforcement has been shown to be a determinant of food intake in previous research. This is the first study to document that MPH affects macronutrient preference. Although the role that dietary fat plays in the etiology and nutritional treatment of obesity has come under scrutiny with the proliferation of low-carbohydrate diets, nutritional guidelines from the American Dietetic Association, derived from evidence-based systematic reviews with the use of meta-analysis, recommend a low-fat (< 30% of energy) diet for the treatment and prevention of obesity and its comorbidities. Moreover, low-fat diets have been shown to reverse heart disease processes, providing evidence that adherence to low-fat diets is critical to attenuating obesity-related comorbidities. It is important to note that the current findings were obtained in a sample comprising predominantly nonobese adults, and future research is needed to determine whether MPH selectively reduces fat intake in obese persons.
As secondary objectives, the researchers explored the degree to which the hypophagic effects of MPH were due to reduced hunger, reduced food reinforcement, or both. Hunger increased significantly after administration of both MPH and placebo, which is not surprising because the length of time since the previous meal increased in both conditions. Although MPH attenuated the increase in hunger relative to placebo, no differences between drug conditions emerged after administration and before the buffet meal, yet subjects ate less in the MPH condition than in the placebo condition. This finding, combined with a nonsignificant correlation between hunger and energy intake, indicates that changes in hunger did not mediate the hypophagic response to MPH in this sample. In exploring reduced food reinforcement as a potential mechanism linking MPH with reduced energy intake, the researchers found that MPH showed a nonsignificant trend toward reduced food reinforcement relative to placebo, and snack food reinforcement was only moderately, and not significantly, correlated with fat and energy intake. Future research using larger samples and measuring food intake over a 24- or 48-hour period is needed to adequately examine the extent to which changes in hunger or food reinforcement mediate the reductions in energy intake and macronutrient preference resulting from MPH. Although not significant in this study, future research should also investigate the relative importance of satiety and between-meal snacking as potential mechanisms of the hypophagic effects of MPH.
In summary, these data show that administering a single dose of short-acting MPH at 0.5 mg/kg produced a significant reduction in energy intake, with a selective reduction in preference for high-fat foods. The current study did not support the hypothesis that reductions in hunger or food reinforcement mediate the reductions in energy intake and macronutrient consumption resulting from MPH administration, possibly because the study was not primarily designed to detect these potential mechanisms of action. Although the anorexic effects of MPH are well documented, it is of theoretical importance that future research using larger samples investigate altered food reinforcement as a potential mechanism of action of MPH, given that MPH increases brain dopamine, food is a primary reinforcer, and dopamine transport and synthesis mediate the reinforcing value of food. The main findings of reduced energy intake and dietary fat consumption from MPH are consistent with the neurobiological "reward deficiency" hypothesis that the dopaminergic system plays an important role in regulating appetitive behaviors such as eating and smoking. Consistent with the reward deficiency model of obesity, these data suggest that increasing brain dopamine results in reduced energy intake, whereas other studies indicate that decreasing brain dopamine through administration of antipsychotic medications results in overeating and weight gain. As such, these data, along with previous laboratory data on the effects of MPH on eating in obese men, suggest that MPH and possibly other agents that increase brain dopamine by blocking its reuptake warrant further study as methods of inhibiting eating behavior.
With regard to future research on MPH, identifying optimal dose and timing of dose on energy intake, as well as examining the safety, tolerability, and possible abuse potential of sustained use of MPH, in obese men and women is needed before it can be considered for testing as a pharmacologic agent in the treatment of obesity.
G. Goldfield, C. Lorello, E. Doucet. Methylphenidate reduces energy intake and dietary fat intake in adults: a mechanism of reduced reinforcing value of food? AJCN;86:308-315 (August 2007). [Correspondence: GS Goldfield, CHEO Research Institute, Mental Health Research, 401 Smyth Road, Ottawa, ON Canada K1H 8L1. E-mail: firstname.lastname@example.org].
ACUTE EFFECTS OF VARIOUS FAST-FOOD MEALS ON VASCULAR FUNCTION AND CARDIOVASCULAR DISEASE RISK MARKERS: THE HAMBURG BURGER TRIAL
Worldwide consumption of fast food is steadily increasing. In the United States alone, sales of fast food reached a value of $161 billion in 2004, of which burgers accounted for 53%. Today in the United States, up to 37% of adults and up to 42% of children regularly consume fast food. These adults and children have higher intakes of energy, fat, saturated fat, sodium, and carbonated soft drinks and lower intakes of vitamins A and C, milk, fruit, and vegetables than do adults and children who do not consume fast food regularly. Epidemiologic, clinical, and in vitro evidence indicates that frequent consumption of fast food may have an unfavorable effect on the cardiovascular disease (CVD) risk profile in general, because fast food has been shown to promote weight gain and insulin resistance. Several studies have suggested that these adverse long-term effects may already be initiated by the acute impairment of endothelial function and the increase in oxidative stress observed after the intake of high-fat meals that are low in vitamins. Endothelial dysfunction and oxidative stress are considered important factors in the development and progression of atherosclerosis, and their presence has repeatedly been shown to be associated with a greater risk of future cardiovascular events and death. In contrast to fast food, some diets, for example the vegetarian or the Mediterranean diet, that contain high amounts of vitamins and unsaturated fat may have protective effects. For example, the administration of high doses of antioxidant vitamins such as vitamins C and E prevented diet-induced impairment of endothelial function. In response to growing public awareness, several large fast-food chains began to adapt their offerings to a new demand for healthier fast food.
Vegetarian burgers and vitamin-rich side orders and fruit juices are increasingly offered as a "healthy alternative" or add-on to conventional beef burger meals and are promoted as having supposedly less negative effects on the vascular risk profile.
Plausible as they are, these claims have never been investigated for complete meals as they are sold and consumed by the public. Therefore, the present study was set up to address 3 questions. First, does commercially sold fast food have an effect on vascular function and CVD risk markers? Second, does it make a difference, with respect to acute effects on vascular function and CVD risk markers, to consume a conventional (beef burger) or a vegetarian fast-food meal? Third, what is the effect of presumed vitamin- and antioxidant-rich side orders on vascular function and CVD risk markers?
In a crossover study, flow-mediated endothelium-dependent dilatation (FMD) and CVD risk markers were investigated in 24 healthy volunteers before and 2 hours and 4 hours after 3 fast-food meals: a conventional beef burger with French fries, ketchup, and carbonated lemon-flavored soda (meal 1); a vegetarian burger with French fries, ketchup, and carbonated lemon-flavored soda (meal 2); and a vegetarian burger with salad, fruit, yogurt, and orange juice (meal 3).
FMD decreased after all 3 fast-food meals: the values were 9.7 ± 2.5%, 7.5 ± 3.5%, and 6.2 ± 3.3% for meal 1; 9.2 ± 3.4%, 7.1 ± 3.4%, and 6.3 ± 4.0% for meal 2; and 8.8 ± 3.3%, 6.2 ± 4.0%, and 6.8 ± 4.3% for meal 3 at baseline, 2 h, and 4 h, respectively. There were significant intraindividual differences for time but not for type of meal. A postprandial increase in baseline diameter of the brachial artery was significant for time but not for type of meal.
This study is the first clinical trial simultaneously comparing the acute effects of one of the world's most frequently consumed fast-food meals and vegetarian alternatives (with and without vitamin enrichment) on endothelial function, oxidative stress, and a wide array of other vascular markers. The most important findings are that researchers observed a decline in FMD after all 3 investigated fast-food meals and that this decline did not differ significantly between the meals. This result is somewhat unexpected, given numerous previous reports that various dietary components or vitamin supplements could prevent acute diet-induced changes in vascular function. Lack of sensitivity of the FMD determination is an unlikely explanation of this negative finding, because the modest decline in FMD observed after all 3 meals reached significance. The changes in FMD can most likely be attributed to diet effects, because, in the few studies conducted to examine circadian rhythms, the circadian changes in FMD were rather moderate and not significant. Moreover, in studies with designs similar to the design of the present study, some meals induced changes in FMD, but others did not. If circadian rhythm were a significant confounder, it would have blunted results in all of the studies.
Therefore, other mechanisms must be considered, such as the small but significant postprandial increase in the baseline diameter of the brachial artery: whereas the absolute vasodilation after cuff occlusion remained rather constant, the postprandial increase in baseline diameter may, as a net effect, have translated into the apparent decline in FMD. It should be noted, however, that the postprandial change in baseline diameter was small in the present study, which makes it rather unlikely that the postprandial decline in FMD can be attributed solely to an increase in baseline diameter. A postprandial attenuation of vascular function has frequently been attributed to dietary triacylglycerols. The observation that triacylglycerols increased not only after meal 1 and meal 2 but also after meal 3 can most likely be explained by the fact that carbohydrates from fruit and fruit juices can also increase triacylglycerols. Nevertheless, postprandial changes in triacylglycerols did not correlate with changes in baseline diameter or FMD.
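The baseline-diameter argument can be made concrete with a short sketch. FMD is conventionally expressed as the percent increase in brachial artery diameter over baseline, so a larger postprandial baseline diameter lowers FMD even when absolute dilation is unchanged; the diameters below are assumed illustrative values, not measurements from the trial.

```python
# Sketch of how a rise in baseline diameter depresses percent FMD even with
# constant absolute vasodilation. All diameters (mm) are hypothetical.

def fmd_percent(baseline_mm, peak_mm):
    """Flow-mediated dilatation as percent change over baseline diameter."""
    return (peak_mm - baseline_mm) / baseline_mm * 100

dilation = 0.35                                   # constant absolute dilation, mm
fasting = fmd_percent(3.6, 3.6 + dilation)        # ~9.7% with a 3.6-mm baseline
postprandial = fmd_percent(3.8, 3.8 + dilation)   # ~9.2% with a 3.8-mm baseline
```

The same absolute dilation yields a lower FMD percentage postprandially, though, as the summary notes, the small observed change in baseline diameter cannot fully account for the decline reported in the trial.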
Another frequently considered explanation of the postprandial decline in FMD is diet-induced oxidative stress. Evidence for diet-induced oxidative stress is usually obtained rather indirectly, through the attenuation of the diet-induced decline in FMD by supplementation with antioxidants such as vitamins C and E. In contrast to pharmacologic (high) doses of vitamins, which were often given intravenously in positive studies, a vitamin-rich side order of mixed salad, orange juice (containing > 200 mg vitamin C), and yogurt could not prevent the impairment of FMD. Possibly, a single vitamin-rich meal is not sufficient to reduce oxidative stress.
Against common expectations, a conventional beef burger meal and presumably healthier alternatives such as vegetarian burgers with or without vitamin-rich side orders did not differ significantly in their acute effects on vascular reactivity. The frequently reported postprandial decline in FMD may be attributed at least in part to an increase in the baseline diameter.
T. Rudolph, K. Ruempler, E. Schwedhelm, et al. Acute effects of various fast-food meals on vascular function and cardiovascular disease risk markers: the Hamburg Burger Trial. AJCN;86:334-340 (August 2007). [Correspondence: TK Rudolph, Department of Cardiology and Angiology, Heart Center, University Hospital Hamburg-Eppendorf, Martinistrasse 52, 20246 Hamburg, Germany. E-mail: email@example.com].
INTAKE OF PHENOL-RICH VIRGIN OLIVE OIL IMPROVES THE POSTPRANDIAL PROTHROMBOTIC PROFILE IN HYPERCHOLESTEROLEMIC PATIENTS
Endothelial dysfunction is one of the first steps in the development of arteriosclerosis, and it is characterized by a thrombogenic state caused by an imbalance between procoagulant and profibrinolytic activity. Among the procoagulant factors, plasminogen activator inhibitor-1 (PAI-1) and factor VII (FVII) concentrations have been linked to coronary heart disease (CHD), and both can be regulated, at least partly, by alimentary lipemia. Attention is currently focusing on investigating whether different components of the diet can regulate acute postprandial changes in coagulation and fibrinolysis. Factor VII coagulant (FVIIc) has been linked to postprandial plasma triacylglycerol concentrations, which suggests an acute effect of triacylglycerol-rich lipoproteins on the activity of FVII (FVIIa). The researchers of the present paper previously reported that a Mediterranean diet reduces fasting plasma concentrations of FVIIa in healthy men, a fact that might be related to the presence of olive oil in this diet.
On the other hand, it has been suggested that PAI-1 activity declines after the intake of meals rich in oleic acid as part of a Mediterranean-type diet, in both the postprandial and fasting states. Virgin olive oil, which is the principal fat in this dietary pattern, contains both oleic acid and a wide range of micronutrients, among which phenolic compounds have displayed anti-thrombotic effects in cell culture and in vitro studies. Studies in humans, however, are scarce, and more evidence on these biological activities is needed. To further investigate whether postprandial phenols from virgin olive oil affect hemostasis, the researchers tested whether 2 breakfasts rich in this oil, but with different contents of phenolic compounds, had different effects on hemostasis postprandially.
Twenty-one hypercholesterolemic volunteers received 2 breakfasts rich in olive oils with different phenolic contents (80 ppm or 400 ppm) according to a randomized, sequential crossover design. Plasma concentrations of lipid fractions, factor VII antigen (FVIIag), activated factor VII (FVIIa), and plasminogen activator inhibitor-1 (PAI-1) activity were measured at baseline and postprandially.
Concentrations of FVIIa increased less and plasma PAI-1 activity decreased more 2 h after the high-phenol meal than after the low-phenol meal. FVIIa concentrations 120 min after intake of the olive oil with a high phenol content correlated positively with fasting plasma triacylglycerols, area under the curve (AUC) of triacylglycerols, and AUC of nonesterified fatty acids and negatively with hydroxytyrosol plasma concentrations at 60 min and fasting high density lipoprotein (HDL)-cholesterol concentrations. PAI-1 positively correlated with homeostasis model assessment of insulin resistance and fasting triacylglycerols and inversely with adiponectin. In a multivariate analysis, the AUCs of nonesterified fatty acids and adiponectin were the strongest predictors of plasma FVIIa and PAI-1, respectively.
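The AUC summaries reported here are typically computed with the trapezoidal rule over the postprandial sampling times. A minimal sketch follows; the time points and triacylglycerol concentrations are assumed example values, not the study's sampling schedule or data.

```python
# Minimal sketch of a trapezoidal area-under-the-curve (AUC) calculation of
# the kind used for postprandial triacylglycerol responses. Times (min) and
# concentrations (mmol/L) are hypothetical illustrative values.

def auc_trapezoid(times, values):
    """Total AUC by the trapezoidal rule over successive sample pairs."""
    return sum((t1 - t0) * (v0 + v1) / 2
               for t0, t1, v0, v1 in zip(times, times[1:], values, values[1:]))

times = [0, 60, 120, 240]      # minutes after the test breakfast (assumed)
tg = [1.2, 1.8, 2.1, 1.5]      # plasma triacylglycerol, mmol/L (assumed)

total_auc = auc_trapezoid(times, tg)                          # mmol*min/L
incremental_auc = total_auc - tg[0] * (times[-1] - times[0])  # area above fasting
```

The incremental AUC (area above the fasting concentration) is the form most often related to postprandial changes such as those in FVIIa.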
The current study showed that, in patients with hypercholesterolemia, the consumption of a breakfast containing virgin olive oil with a high content of phenols induces a smaller postprandial increase in the concentration of FVIIa and a greater decrease in PAI-1 plasma activity than the same olive oil with a low content of phenols. It has been suggested that the postprandial hypertriglyceridemia that follows the intake of high-fat meals activates FVII. The intrinsic mechanism of this activation is not clear, although it is known that some of the reactions for activation of hemostatic factors are due to exposure to lipid bilayers with negative charges, such as those of denuded endothelium or the surface of platelets or oxidized LDL. The present study showed a significant association between the incremental AUC of nonesterified fatty acids and postprandial changes in FVIIa. The hydrolysis of triacylglycerol-rich lipoproteins by lipoprotein lipase may be an important source of elevated concentrations of fatty acid anions near the endothelium. These fatty acids are substrates for the lipoperoxidation produced by the increase in oxidative stress during the postprandial period. Olive oil phenolic compounds have been shown to act as chain-breaking antioxidants for the autocatalytic chain reaction of fatty acid peroxidation.
In the present study, the researchers found a significant decrease in postprandial concentrations of FVIIag after both oils, with no significant difference between the 2 types of olive oil. At the same time, there was a smaller increase in FVIIa after ingestion of the olive oil with a high content of phenols than after the olive oil with a low content of these compounds. Because the sole difference in the composition of these 2 oils was their phenolic content, these data suggest that the decrease in FVIIag is attributable to the oils' common fat content, whereas the effect on FVII activation is attributable to their phenol content.
The results of the present study may partly explain earlier contradictory results of studies that tested the effects of olive oil on hemostasis. It is possible that the concentrations of microcomponents of the olive oil used in some of those studies did not reach the levels needed to activate the antithrombotic properties of olive oil. The findings of this study provide new evidence of the healthy effects of virgin olive oil. In conclusion, the consumption of a breakfast containing olive oil rich in phenolic compounds may improve the thrombogenic postprandial profile of FVIIa and PAI-1 concentrations associated with acute fat intake.
J. Ruano, J. Lopez-Miranda, R. de la Torre, et al. Intake of phenol-rich virgin olive oil improves the postprandial prothrombotic profile in hypercholesterolemic patients. AJCN;86:341-346 (August 2007) [Correspondence: F Pérez-Jiménez, Lipids and Atherosclerosis Research Unit, Reina Sofia University Hospital, Avenue Menéndez Pidal, s/n, 14004 Cordoba, Spain. E-mail: firstname.lastname@example.org].
CONSUMPTION OF FAT-FREE FLUID MILK AFTER RESISTANCE EXERCISE PROMOTES GREATER LEAN MASS ACCRETION THAN DOES CONSUMPTION OF SOY OR CARBOHYDRATE IN YOUNG, NOVICE, MALE WEIGHTLIFTERS
The accretion of muscle protein as a result of resistance exercise occurs through successive periods of positive muscle protein balance. These periods of positive protein balance are due to a synergistic interaction of exercise- and feeding-induced stimulation of muscle protein synthesis (MPS). Protein ingestion provides essential amino acids for protein synthesis, which also act, in the case of leucine, to stimulate the translational machinery. Protein ingestion also increases systemic insulin, which has a mild stimulatory effect on MPS; rather than stimulating MPS in a dose-dependent manner, insulin appears to act permissively, in that a minimal threshold is required for MPS to proceed unabated and further stimulation is not seen at higher concentrations.
Consumption of whey protein has been shown to induce a more rapid aminoacidemia, with a greater amplitude, than casein. Whey protein consumption results in a rapid stimulation of whole-body protein synthesis and also of oxidation, whereas casein results in a suppression of whole-body proteolysis. Casein ingestion induces a more positive whole-body leucine balance than does whey. Recent findings showed that milk proteins appear superior, or at least equivalent, to either isolated whey or casein alone in supporting postprandial dietary nitrogen utilization. In addition, ingestion of intrinsically labeled soy and milk proteins resulted in greater transfer of nitrogen to urea when soy was ingested. The suggestion from these results, and from a modeling approach, was that proteins from soy are directed toward splanchnic metabolism, whereas milk proteins are directed to peripheral sites.
The researchers of the present paper have recently shown that when milk and soy proteins are ingested after resistance exercise, milk protein resulted in a more positive net amino acid balance and a greater postexercise stimulation of protein synthesis. The acute findings that were observed should result, over the longer term, in a greater increase in muscle protein accrual with regular milk than with soy protein consumption. The researchers tested this hypothesis by having subjects consume either fluid milk or soy protein, with inclusion of an energy-matched (carbohydrate only) control group, immediately after exercise to promote muscle hypertrophy. Subjects also consumed a second drink 1 hour after having worked out because 2 separate periods of hyperaminoacidemia only 1 hour apart can still promote a full anabolic response. Their hypotheses were that both protein-consuming groups would have greater protein accretion with resistance training than an energy-matched carbohydrate-consuming group, but that the milk-consuming group would have greater muscle protein accretion than both the soy- and carbohydrate-consuming groups.
The researchers recruited 56 healthy young men who trained 5 d/wk for 12 wk on a rotating split-body resistance exercise program in a parallel 3-group longitudinal design. Subjects were randomly assigned to consume drinks immediately and again 1 hour after exercise: fat-free milk (milk; n = 18); fat-free soy protein (soy; n = 19) that was isoenergetic, isonitrogenous, and macronutrient ratio matched to milk; or maltodextrin that was isoenergetic with milk and soy (control group; n = 19).
Muscle fiber size, maximal strength, and body composition by dual-energy X-ray absorptiometry (DXA) were measured before and after training. No between-group differences were seen in strength. Type II muscle fiber area increased in all groups with training, but with greater increases in the milk group than in both the soy and control groups. Type I muscle fiber area increased after training only in the milk and soy groups, with the increase in the milk group being greater than that in the control group. DXA-measured fat- and bone-free mass (FBFM) increased in all groups, with a greater increase in the milk group than in both the soy and control groups.
The present data show that the chronic consumption of fluid skim milk (500 mL) immediately and 1 hour after resistance exercise promoted greater gains in FBFM (that is, lean mass) than did consumption of either an isonitrogenous, isoenergetic, and macronutrient ratio-matched soy protein-containing beverage or an isoenergetic carbohydrate drink. The researchers also observed a greater reduction in body fat mass associated with chronic postexercise milk consumption. Previously, they reported acute data showing that fluid milk consumption resulted in greater net amino acid uptake and fractional protein synthesis than did the same soy protein-containing drink used in the present study. In addition, consumption of whole or fat-free milk has been shown to give rise to a positive muscle protein balance. They hypothesized that their acute findings would translate into greater lean mass gains with chronic milk consumption than with soy protein consumption. As proof of principle, the present study confirms that their acute findings were at least qualitatively predictive of a greater lean mass gain with long-term training.
In conclusion, immediate and 1-h postexercise milk consumption, as opposed to soy or isoenergetic carbohydrate, resulted in greater gains in FBFM and type II muscle fiber area. Increases in type I muscle fiber area were greater in the milk and the soy groups than in the control group. All groups showed increased strength as a result of the training program; however, there were no between-group effects. A greater fat mass loss was seen in subjects who consumed the postexercise milk supplement than in both the soy and control groups, which may be related to dietary calcium intake or an endogenous property of the milk proteins themselves.
J. Hartman, J. Tang, S. Wilkinson, et al. Consumption of fat-free fluid milk after resistance exercise promotes greater lean mass accretion than does consumption of soy or carbohydrate in young, novice, male weightlifters. AJCN;86:373-381 (August 2007). [Correspondence: SM Phillips, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4K1, Canada. E-mail: email@example.com].
TOTAL DAILY ENERGY EXPENDITURE AMONG MIDDLE-AGED MEN AND WOMEN: THE OPEN STUDY
Energy intake is necessary for body function, yet balancing energy intake and total energy expenditure (TEE) to avoid inappropriate weight gain is difficult for many individuals. In 2002, the Institute of Medicine Food and Nutrition Board published the dietary reference intake (DRI) estimated energy requirement (EER), the energy intake needed to maintain energy balance in healthy adults by sex, age, weight, height, and physical activity level. These DRI estimates were calculated with equations based on a pooled analysis of doubly labeled water (DLW) studies comprising 408 normal-weight and 360 overweight and obese persons. One of the research recommendations in the report called for further data, in particular on persons aged 40 to 60 years. The DLW method is a noninvasive technique for measuring TEE in free-living individuals over a 1- to 2-week period. Because of the expense and limited availability of DLW, however, few studies have measured TEE in a large cohort. The Health, Aging, and Body Composition study measured TEE with the use of DLW and compared the results with values predicted by the DRI equations in 288 black and white men and women aged 70 to 79 years; in that population, the DRI equation was found to be accurate when compared with TEE from DLW, with a mean difference of 0 ± 14%. The Observing Protein and Energy Nutrition (OPEN) Study obtained estimates of TEE with the use of DLW in 484 men and women aged 40 to 69 years. The present article reports energy expenditure from the OPEN Study, compares TEE by sex and personal characteristics, and evaluates the DRI EER equation for this population.
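For readers who want to see the shape of the DRI EER equations, the adult (19 y and older) forms as commonly cited from the 2002 IOM report can be sketched as follows. The coefficients, the PA (physical activity) coefficient, and the example inputs here should all be verified against the report before any real use; this is an illustrative sketch, not the authoritative formulation.

```python
# Sketch of the adult DRI estimated energy requirement (EER) equations as
# commonly cited from the 2002 IOM report (kcal/d). Coefficients should be
# verified against the report; the example subject below is hypothetical.

def eer_kcal_per_day(sex, age_y, weight_kg, height_m, pa):
    """EER for adults aged 19 y and older; pa is the physical activity coefficient."""
    if sex == "male":
        return 662 - 9.53 * age_y + pa * (15.91 * weight_kg + 539.6 * height_m)
    return 354 - 6.91 * age_y + pa * (9.36 * weight_kg + 726 * height_m)

# e.g., a hypothetical 50-y-old, 80-kg, 1.78-m "low active" man (PA assumed 1.11)
eer = eer_kcal_per_day("male", 50, 80, 1.78, 1.11)
```

As the article notes, applying the equation in practice hinges on a good estimate of the physical activity level, which is the hardest input to obtain.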
TEE was measured by the DLW method in 450 men and women aged 40 to 69 years from the Observing Protein and Energy Nutrition Study. Resting metabolic rate was estimated with the Mifflin equation. Unadjusted TEE was lower in women than in men (by 591 kcal/d); however, when the analysis was adjusted for fat-free mass (FFM), women had significantly higher TEE than did men (by 182 kcal/d). This difference appeared to be due to higher physical activity levels in women (physical activity energy expenditure [PAEE] adjusted for FFM was 188 kcal/d greater in women than in men). Mean TEE was lowest in the seventh decade of life. TEE from DLW was highly correlated with EER from the DRI equations.
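The Mifflin equation referenced above (often cited as Mifflin-St Jeor) estimates resting metabolic rate from weight, height, age, and sex. A minimal sketch with the commonly cited coefficients follows; verify the coefficients against the original publication, and note that the example inputs are hypothetical.

```python
# Sketch of the Mifflin (Mifflin-St Jeor) resting metabolic rate equation,
# kcal/d, with coefficients as commonly cited (verify against the original
# publication). The example subjects below are hypothetical.

def mifflin_rmr_kcal(sex, weight_kg, height_cm, age_y):
    """Resting metabolic rate: 10*wt + 6.25*ht - 5*age, +5 (men) or -161 (women)."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_y
    return base + 5 if sex == "male" else base - 161

rmr_m = mifflin_rmr_kcal("male", 80, 178, 50)    # hypothetical 50-y-old man
rmr_f = mifflin_rmr_kcal("female", 65, 165, 50)  # hypothetical 50-y-old woman
```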
In this large DLW study of middle-aged men and women, the average TEE for women was lower than that for men (by 591 kcal/d). The researchers considered differences in body composition and menopausal status as possible explanations for this difference and concluded that the difference was primarily due to differences in FFM and estimated PAEE between men and women. The magnitude of the difference in TEE diminished with adjustment for height and weight, but TEE remained significantly higher in men (by 236 kcal/d). However, after adjustment for FFM, the subject characteristic that most highly correlates with TEE, the direction of the differences changed, and TEE became significantly higher in women (by 182 kcal/d). Furthermore, they did not find any significant differences in TEE by menopausal status in women. TEE was significantly higher in women than in men (by 148 kcal/d) when adjusted for menopausal status, FFM, and age.
The difference in TEE between men and women appears to be due to greater estimated PAEE for women, which showed the same trend as TEE. The unadjusted average estimated PAEE was higher in men than in women (143 kcal/d); this difference was still significant, but smaller (61 kcal/d), when adjusted for height and weight, and reversed when adjusted for FFM (women 188 kcal/d greater than men). Similarly, estimated PAL values were higher for women than for men. When TEE was adjusted for estimated PAL in addition to menopausal status, FFM, and age, the difference in TEE between men and women was no longer statistically significant, which indicates that differences between men and women were due to higher estimated PAL in the women.
These results corroborate the DRI equations for total energy intake. Although the researchers found the equation to slightly overestimate TEE for this age group, particularly for clinically obese women, TEE from DLW and the DRI equation were highly correlated. In addition, the relation between TEE from DLW and TEE from the DRI equation did not differ from the line of identity. According to the results of this analysis, the estimated EER for clinically obese women appears to be too high by approximately 10%. Dietitians and others advising clinically obese women on weight loss should be aware of this potential limitation. It is also important to note that this analysis was intended to validate the DRI equation by using an independent DLW sample. The equation was validated in terms of its intended use to estimate the EER of a healthy adult of a specified age, sex, weight, height, and physical activity level. The researchers of the present study did not validate the equation for use in nutritional epidemiology or surveillance studies. Furthermore, it is unclear how one uses the equation without a good estimate of PAL, which limits its general use.
This large DLW study supports the use of the DRI equation for EER in middle-aged adults. The researchers also found that energy expenditure was lower in the seventh decade of life, which could have been due to changes in body composition in women. Furthermore, they found that when TEE was adjusted for FFM, TEE in middle-aged women was not lower but rather higher than in men, which appeared to be due to higher levels of physical activity.
J. Tooze, D. Schoeller, A. Subar, et al. Total daily energy expenditure among middle-aged men and women: the OPEN Study. AJCN;86:382-387 (August 2007). [ Correspondence: JA Tooze, Department of Biostatistical Sciences, Wake Forest University School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157. E-mail: firstname.lastname@example.org].
HIGH FOLATE INTAKE IS ASSOCIATED WITH LOWER BREAST CANCER INCIDENCE IN POSTMENOPAUSAL WOMEN IN THE MALMÖ DIET AND CANCER COHORT
Folate can be found in high concentrations in dark-green leafy vegetables, legumes, fruit, and liver. Folate is a coenzyme that carries one-carbon units and is thereby of great importance in the metabolism of amino acids and nucleotides. Two main mechanisms link folate deficiency to cancer development: a reduced synthesis of S-adenosyl methionine (SAM), which results in aberrations in DNA methylation, and a reduced synthesis of the pyrimidine thymidylate, which results in the misincorporation of uracil into DNA. A third possible mechanism is impaired purine synthesis and subsequent changes in DNA. Epidemiologic studies have indicated that high folate intake may protect against colorectal cancer but also against cancer at other sites. Current epidemiologic evidence of a relation between high folate intake and reduced breast cancer risk is, however, not conclusive. High folate intake was shown to be associated with a decreased risk of postmenopausal breast cancer in some studies, whereas this association was not confirmed in other studies. Other B vitamins are involved in folate metabolism. Vitamin B-12 acts as a cofactor to the enzyme methionine synthase, and both vitamin B-6 and riboflavin serve as cofactors for folate-dependent enzymes.
A previous report from the Malmö Diet and Cancer (MDC) study showed positive associations between BMI and breast cancer. Higher concentrations of endogenous sex hormones have been suggested to partly explain the increased risk of breast cancer among high consumers of alcohol. However, an elevated concentration of sex hormones is also a possible mechanism by which obesity can increase breast cancer risk. In the Swedish mammography cohort, overweight women with high dietary intakes of ascorbic acid had a lower incidence of breast cancer, but the risk relation was reversed in lean women. Ascorbic acid, an antioxidant that may protect against DNA damage, is mainly found in fruit and vegetables, as is folate.
The aim of this study was to investigate whether folate intake is associated with postmenopausal breast cancer in women from the MDC cohort. This article also evaluates whether other nutrients influence the association between folate intake and breast cancer incidence. Finally, the researchers wanted to examine whether the potential association between folate and breast cancer development was different among women who already had a higher breast cancer risk because they were overweight.
This prospective study included all women aged > 50 years (n = 11699) from the MDC cohort. The mean follow-up time was 9.5 years. The researchers used a modified diet-history method to collect nutrient intake data. At the end of follow-up, 392 incident invasive breast cancer cases were verified. The researchers used proportional hazard regression to calculate hazard ratios (HRs).
Compared with the lowest quintile, the incidence of invasive breast cancer was reduced in the highest quintile of dietary folate intake; total folate intake, including supplements; and dietary folate equivalents. Folate intake was negatively associated with the incidence of invasive breast cancer in the MDC cohort after adjustments for known risk factors, other B vitamins, and other potential confounders. The negative association remained after adjustment for intakes of fiber, carotene, or ascorbic acid.
In line with the initial hypothesis, an inverse association between folate intake and breast cancer incidence was observed in women with a BMI > 25, but no significant association was observed in normal-weight women. However, the power to detect an interaction between BMI and folate intake was low, and the test for interaction was not significant. Consequently, the researchers cannot exclude that the observed differences between strata were due to chance. No study has previously investigated whether the relation between folate intake and breast cancer differs between overweight and normal-weight women. These findings of an association between high folate intake and lower breast cancer incidence support the biological hypothesis that low folate intake enhances the development of breast cancer. Folate in a reduced form, 5-MTHF, acts as a methyl donor in the synthesis of methionine, which is needed for the synthesis of SAM. SAM is the most important methyl donor in biological reactions, including DNA methylation. Changes in DNA methylation patterns are an early event in carcinogenesis and may influence gene silencing. Folate is also involved in DNA synthesis and repair; folate deficiency has been related to elevated uracil incorporation into DNA and subsequent chromosome breaks, which may contribute to an increased risk of cancer.
In summary, these results indicate that high folate intakes are associated with a lower incidence of postmenopausal breast cancer. These findings could not be explained by intakes of other nutrients found in the same foods as folate.
U. Ericson, E. Sonestedt, B. Gullberg, et al. High folate intake is associated with lower breast cancer incidence in postmenopausal women in the Malmo Diet and Cancer cohort. AJCN;86:434-443 (August 2007). [Correspondence: U Ericson, Nutritional Epidemiology, Clinical Research Center, Building 60, floor 13, Malmö University Hospital, entrance 72, SE-205 02 Malmö, Sweden. E-mail: email@example.com].
DIETARY PATTERNS RELATED TO GLYCEMIC INDEX AND LOAD AND RISK OF PREMENOPAUSAL AND POSTMENOPAUSAL BREAST CANCER IN THE WESTERN NEW YORK EXPOSURE AND BREAST CANCER STUDY
Many previous studies have investigated the association of glycemic index (GI) and glycemic load (GL) with breast cancer. GI reflects the effect of the carbohydrates in individual foods on the postprandial glycemic response, whereas GL combines the GI with total carbohydrate intake; thus, it approximates the total glycemic effect of the diet given an adequate assessment of total diet. Dietary GI and GL can affect carbohydrate metabolism in vivo: high GI and GL have been associated with hyperinsulinemia, impaired glucose tolerance, and higher circulating insulin-like growth factor (IGF) concentrations. Hyperinsulinemia, high IGF concentrations, and high GI and GL have been associated with an increased risk of breast cancer in a few, but not all, epidemiologic studies.
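GL is computed food by food as GI × available carbohydrate (g) / 100, summed over the diet. A minimal sketch, with illustrative GI and portion values that are not from the WEB Study:

```python
# Glycemic load: GL = GI * available carbohydrate (g) / 100, summed over foods.
# The GI and carbohydrate-gram values below are illustrative only.
def glycemic_load(foods):
    """foods: list of (gi, carb_grams) pairs for one day's intake."""
    return sum(gi * carbs / 100.0 for gi, carbs in foods)

day = [
    (75, 30.0),  # white bread, one serving
    (55, 27.0),  # oatmeal
    (40, 15.0),  # apple
]
gl = glycemic_load(day)   # 22.5 + 14.85 + 6.0 = 43.35
```

Because portion carbohydrate enters the product, two diets with identical GI can have very different GLs, which is why GL is treated as the better approximation of the diet's total glycemic effect.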
Because the glycemic effect of an individual food may not be representative of that of a mixed diet, the validity and utility of GI as a measure of the glycemic effect of diet on carbohydrate metabolism and related disease risk has been questioned. Construction of the GI relies on the availability of reliable food-composition data for foods contributing to carbohydrate intake in the target population. Although the GI has been widely studied, it is limited in that it does not take into account specific combinations of foods that might affect glycemic response. Thus, the extent to which food combinations might affect GI and GL and subsequent disease risk remains unclear.
Methodology that takes advantage of the multidimensionality of food consumption has been developed to describe patterns of food use that can be associated with chronic disease. Principal components analysis (PCA), the most widely used dietary patterns method, produces linear combinations of foods with the goal of identifying patterns that explain the largest variation in food use. However, PCA patterns do not necessarily correspond to a physiologic response. Recently, another statistical method, reduced rank regression (RRR), was developed to derive dietary patterns that predict a specific response such as a biomarker or a nutrient that has been previously associated with a chronic disease outcome. In this regard, RRR has an advantage over PCA in that the identified dietary patterns not only describe food use, but do so in association with a previously described disease risk factor. Given the inconsistent associations reported for GI and GL and breast cancer, and the methodologic concerns associated with GI as a risk factor, the researchers of this paper investigated the use of RRR to identify dietary patterns related to GI and GL and their association with pre- and postmenopausal breast cancer in the Western New York Exposure and Breast Cancer (WEB) Study.
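As a rough sketch of the idea (synthetic data, not the WEB Study, and a simplified stand-in for a full RRR implementation): regress the responses (here, GI and GL) on the food-group frequencies by ordinary least squares, then take the principal direction of the fitted responses; the resulting weighted sum of food frequencies is the pattern score, and its largest loadings fall on the food groups that actually drive the responses.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 6                                    # subjects, food groups
X = rng.poisson(3, size=(n, p)).astype(float)    # food-frequency counts
X -= X.mean(axis=0)                              # center predictors

# Responses (GI, GL) driven mostly by food groups 0 and 1, plus small noise
Y = np.column_stack([0.8 * X[:, 0] + 0.3 * X[:, 1],
                     0.5 * X[:, 0] + 0.9 * X[:, 1]])
Y += 0.1 * rng.standard_normal((n, 2))
Y -= Y.mean(axis=0)

B, *_ = np.linalg.lstsq(X, Y, rcond=None)        # OLS coefficients, p x 2
Yhat = X @ B                                     # fitted responses
_, _, Vt = np.linalg.svd(Yhat, full_matrices=False)
v1 = Vt[0]                                       # leading response direction
w = B @ v1                                       # factor loadings on food groups
scores = X @ w                                   # one pattern score per subject
```

Unlike PCA on X alone, the loadings `w` are steered by the responses: food groups unrelated to GI/GL get near-zero weight even if they vary a lot across subjects.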
RRR was used to identify dietary patterns predicting GI and GL from food-frequency data obtained in the Western New York Exposure and Breast Cancer Study (1166 cases, 2105 controls). Odds ratios (ORs) and 95% CIs were estimated with unconditional logistic regression, adjusted for energy and nondietary breast cancer risk factors. Sweets, refined grains, and salty snacks explained 34% of the variance in GI and 68% of the variance in GL. In general, breast cancer risks were not associated with GI, GL, or dietary pattern score. However, the researchers observed a significant reduction in postmenopausal breast cancer risk with GI and GL pattern scores combined, especially in women with a body mass index (in kg/m2) > 25. Conversely, in premenopausal women, increased risks were associated with high GL pattern scores only for women with a BMI > 25.
The researchers had 2 main aims in the present study. The first was to explore the use of RRR to identify dietary patterns related to GI and GL. Using RRR, they identified refined grains, salty snacks, and added fats as most important in explaining both GI and GL. Although these 3 groups make intuitive sense in predicting GI and GL, none was independently associated with breast cancer risk in these data. Although the researchers present factor loadings only for the foods most strongly related to each pattern, the advantage of RRR-derived dietary patterns is that all foods and food groups load on each pattern, albeit with varying strengths. Each subject is thus assigned a score representing the sum of the frequency of use of all foods and food groups weighted by the food-specific factor loading and can subsequently be ranked on the strength of adherence to a particular pattern of intake. Thus, the pattern scores represent a total dietary pattern of behavior rather than the consumption of one or more food groups. On the other hand, it is entirely possible that some other component of diet associated with GI or GL is responsible for any changes in risk observed with the GI- or GL-related dietary patterns.
The second goal was to examine associations with breast cancer risk for these dietary patterns and for GI and GL. Similar to several previous reports, they observed little association between breast cancer and GI or GL, whether expressed as an index or dietary pattern score, at least when the data were not stratified by BMI. In this regard, they might conclude that RRR dietary patterns derived from GI and GL showed no advantage over the simpler GI or GL indexes. However, among overweight premenopausal women, the GL dietary pattern was associated with a two-fold increase in risk, contrary to the lack of association observed with the GL index in their data. Because excess adiposity may be associated with hyperinsulinemia and hyperglycemia, it is possible that the GL dietary pattern provides a better estimate of the glycemic effect of the diet on breast cancer risk in overweight women because it emphasizes those foods most strongly associated with GL. Alternatively, the GL dietary pattern includes other dietary behaviors related to GL and may be more relevant to total exposure as it relates to breast cancer risk.
In conclusion, the researchers found that RRR may be useful in identifying dietary patterns associated with GI and GL, and that these patterns worked equally as well as did GI and GL in estimating the association of the dietary glycemic effect with breast cancer. Future studies are warranted to incorporate biomarkers of glycemic control, such as insulin, C-peptide, and Hb A1c, to further improve the identification of related foods and food groups. More targeted identification of specific dietary patterns could help in the development of guidelines to reduce chronic disease risk related to GI and GL.
S. McCann, W. McCann, C. Hong, et al. Dietary patterns related to glycemic index and load and risk of premenopausal and postmenopausal breast cancer in the Western New York Exposure and Breast Cancer Study. AJCN;86:465-471 (August 2007). [Correspondence: SE McCann, Department of Cancer Prevention and Control, Roswell Park Cancer Institute, Elm and Carlton Streets, Buffalo, NY 14263. E-mail: firstname.lastname@example.org].
OLDER ADULTS WHO USE VITAMIN/MINERAL SUPPLEMENTS DIFFER FROM NONUSERS IN NUTRIENT INTAKE ADEQUACY AND DIETARY ATTITUDES
Many surveys have shown that a large percentage of older adults do not receive recommended amounts of many nutrients from food alone. The Healthy Eating Index also indicates that the diets of older adults need improvement and may leave them susceptible to nutrition-related problems. At the same time, a growing proportion of older adults are using vitamin and mineral supplements, which can substantially increase nutrient intake and counter some of these shortcomings of their diets.
Although supplement use provides potential benefits in increasing nutrient intakes, there are potential drawbacks. The extensive use of supplements by older adults increases the possibility of overconsumption of nutrients. Considering the potential for both positive and negative effects on overall nutrient intake, an important question is what factors influence supplement use. Therefore, researchers from the US Department of Agriculture measured the nutrient intake adequacy of vitamin/mineral supplement users and nonusers aged 51 years and older, determined the efficacy of supplement practices in compensating for dietary deficits, and identified predictors of supplement use.
Data were drawn from the Continuing Survey of Food Intakes by Individuals (CSFII) and the Diet and Health Knowledge Survey, 1994 to 1996. Analyses of two 24-hour recalls, demographic variables, and attitude questions were used in this study of 4384 adults aged 51 years and older. Data from the CSFII sample were used to determine the nutrient adequacy of supplement users and nonusers, while data from the smaller Diet and Health Knowledge Survey sample were used to identify attitudinal and sociodemographic predictors of supplement use. Usual nutrient intake distributions were estimated using the Iowa State University method. The Estimated Average Requirement (EAR) cutpoint method was applied to determine the proportion of older adults not meeting requirements before and after accounting for nutrient intake from supplements.
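The EAR cutpoint method itself is simple arithmetic: the prevalence of inadequacy is the share of the usual-intake distribution falling below the EAR, recomputed after adding supplement contributions. A minimal sketch with hypothetical intakes (the EAR shown is the adult-female zinc value, used here only for illustration; the survey's own estimates came from modeled usual-intake distributions, not raw values like these):

```python
# EAR cutpoint method sketch: prevalence of inadequacy = share of usual
# intakes below the EAR, before and after counting supplements.
EAR_ZINC = 6.8  # mg/day, EAR for zinc in adult women (illustrative use)

food_intake = [5.2, 7.1, 6.0, 9.3, 4.8, 8.0, 6.5, 5.9]   # mg/day from food
supplement  = [0.0, 0.0, 8.0, 0.0, 15.0, 0.0, 8.0, 0.0]  # mg/day from pills

def prevalence_below_ear(intakes, ear):
    """Fraction of the group whose intake falls below the EAR."""
    return sum(x < ear for x in intakes) / len(intakes)

before = prevalence_below_ear(food_intake, EAR_ZINC)            # 5/8
total  = [f + s for f, s in zip(food_intake, supplement)]
after  = prevalence_below_ear(total, EAR_ZINC)                  # 2/8
```

The same calculation run per nutrient, against the upper intake level instead of the EAR, flags the overconsumption risk discussed below.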
There were 1777 daily supplement users, 428 infrequent users, and 2179 nonusers. A significantly smaller proportion of supplement users than nonusers had intakes from food alone below the EAR for vitamins A, B-6, and C, folate, zinc, and magnesium. Nevertheless, less than 50% of both users and nonusers met the EAR for folate, vitamin E, and magnesium from food sources alone. Overall, supplements improved the nutrient intake of older adults. After accounting for the contribution of supplements, 80% or more of users met the EAR for vitamins A, B-6, B-12, C, and E, folate, iron, and zinc. However, some supplement users, particularly men, exceeded tolerable upper intake levels for iron and zinc, and a small percentage of women exceeded the upper level for vitamin A. Significant sociodemographic factors related to supplement use for older men were age group, metropolitan area, and educational status. Race, region, smoking status, and vegetarian status were significant factors for women. Attitude about the importance of following a healthful diet was a consistent predictor of supplement use for both men and women.
Supplements had a positive influence on nutrient adequacy for men and women aged 51 years and older. Whereas dietary modifications to improve intake are vital, the use of supplements by older adults appears beneficial to attain nutrient adequacy. To avoid exceeding the upper limit, this population should avoid the routine supplemental intake of certain nutrients including vitamin A and iron.
Rhonda S. Sebastian, Linda E. Cleveland, Joseph D. Goldman, et al. Older Adults Who Use Vitamin/Mineral Supplements Differ from Nonusers in Nutrient Intake Adequacy and Dietary Attitudes. JADA;107:1322-1332 (August 2007). [Correspondence: Rhonda S. Sebastian, MA, Nutritionist, US Department of Agriculture, Agricultural Research Service, Food Surveys Research Group, 10300 Baltimore Ave, Bldg 005, Room 102, BARC-West, Beltsville, MD 20705. E-mail: Rhonda.Sebastian@ars.usda.gov].
KNOWLEDGE OF CURRENT DIETARY GUIDELINES AND FOOD CHOICE BY COLLEGE STUDENTS: BETTER EATERS HAVE HIGHER KNOWLEDGE OF DIETARY GUIDANCE
Foods prepared away from home, including the food making up many college meal plans, are known to contain more calories and fat and fewer nutrients than foods prepared at home. College students' decisions about what to eat are currently made in an environment where no nutrition labeling is required. There is some evidence that people use food-related knowledge to improve their diets, although the literature has focused primarily on food labels. There is a gap in the literature concerning whether knowledge of dietary guidelines translates into better eating behaviors, particularly among the high-risk college-student age group. Therefore, a recent study focused on the self-reported eating patterns of 200 college students and tried to identify how closely they reported following the Dietary Guidelines for Americans and whether their eating patterns were related to their knowledge of dietary guidance.
The current cross-sectional study, part of a larger study on student food choice, used a convenience sample of 200 college students who had been on the university meal plan for at least two years. Students' heights and weights were measured once, and their BMI was calculated and used to classify them as overweight (BMI > 25) or not (BMI < 25). To establish a baseline measure of perceived dietary intake, nutrition knowledge, and basic demographic information, students completed an at-home Internet-based questionnaire. The questionnaire was based on the MyPyramid Food Guidance System and the US Department of Agriculture Diet and Health Knowledge Survey.
The subjects included 136 female and 64 male first-year college students. Thirty-seven percent reported being sedentary, 43.5% were moderately active, and 19.5% were active based on the MyPyramid classification system. Eighteen percent were classified as overweight. For the five major food categories, about one third of students reported eating the recommended amounts. This finding is consistent with previous studies using the Dietary Guidelines for Americans, 2000; however, it is unknown whether the new recommendations, provided in terms of cups for fruits, vegetables, and dairy and ounces for grains and protein, make a difference in the ability of students to report intake. For fruit, dairy, protein, and whole grains, increased knowledge was related to an increased likelihood of meeting dietary guidelines. In addition, when asked about individual food choices, nutrition knowledge was related to making more healthful choices in every case.
Ultimately, increased knowledge of dietary guidelines appears to be positively related to more healthful eating patterns. This suggests that guidelines such as the Dietary Guidelines for Americans 2005, in conjunction with effective public-awareness campaigns, may be a useful mechanism for promoting change in what foods consumers choose to eat.
Jane Kolodinsky, Jean Ruth Harvey-Berino, Linda Berlin, et al. Knowledge of Current Dietary Guidelines and Food Choice by College Students: Better Eaters Have Higher Knowledge of Dietary Guidance. JADA;107:1409-1413 (August 2007). [Correspondence: Jane Kolodinsky, PhD, Professor and Chair, Department of Community Development and Applied Economics, University of Vermont, 205 Morrill Hall, Burlington, VT 05405. E-mail: Jane.Kolodinsky@uvm.edu].
ONLINE CONTINUING EDUCATION COURSE ENHANCES NUTRITION AND HEALTH PROFESSIONALS' KNOWLEDGE OF FOOD SAFETY ISSUES OF HIGH-RISK POPULATIONS
The identification of new foodborne diseases, ongoing changes in the food supply, publicized foodborne illness outbreaks, an aging population, more people living with compromised immune systems, and higher rates of foodborne illness among at-risk populations have all contributed to the need for nutrition and health professionals who serve at-risk populations to provide improved education about food safety issues.
Continuing education traditionally has occurred through workshops and professional conferences, but with the growing popularity of the Internet, online courses have emerged as a convenient way to earn continuing education credits. They are convenient, efficient, and affordable. However, it is important that a program be of high quality, that the content be well organized and easy to follow, and that well-designed course evaluations be conducted to determine the effectiveness of online courses. A recent study in JADA developed and evaluated an online continuing education course designed for dietetics professionals, nurses, and extension educators who provide nutrition and health information to persons at high risk for foodborne illness.
"Food Safety for High Risk Populations" was adapted from a graduate-level class taught at three universities. The class was converted into six Web-based modules (overview of foodborne illness, immunology, pregnancy, human immunodeficiency virus, cancer and transplants, and lifecycle) and offered to 140 nutrition and health professionals. The subjects had eight weeks to complete the modules, pre- and post-questionnaires, and a course evaluation. Change in knowledge was measured using the pre- and post-questionnaires, and course efficacy was evaluated with the post-course questionnaire.
Registered dietitians and dietetic technicians made up the largest share of participants (45%) of any profession. For each module, knowledge scores increased significantly from the pre- to the post-questionnaire. The largest positive changes occurred for the modules on immunology and cancer. Overall, knowledge scores increased from 67.3% before the modules to 91.9% afterward. Course evaluation responses were favorable, and the subjects indicated that course objectives were met. The participants liked the self-paced nature of the course and the flexibility of being able to complete the modules from work or home at any time.
Improved scores among all professional groups after completing the online course indicate that the course was effective in enhancing knowledge, which has positive implications for future Internet-based courses. Factors that limit the generalizability of this study include the self-selected convenience sample and the uneven distribution of professional categories within the study population (for example, only nine nurses completed the course). The course has since been made available online for a fee for six continuing education credits from the American Dietetic Association, Ohio Nurses Association, or American Association of Family and Consumer Sciences.
Stephanie Wallner, Patricia Kendall, Virginia Hillers, et al. Online Continuing Education Course Enhances Nutrition and Health Professionals' Knowledge of Food Safety Issues of High-Risk Populations. JADA;107:1333-1338 (August 2007). [Correspondence: Patricia Kendall, PhD, RD, Department of Food Science and Human Nutrition, Colorado State University, Fort Collins, CO 80523-1571. E-mail: Patricia.Kendall@colostate.edu].
CANADIAN DIETITIANS' UNDERSTANDING OF THE CLIENT-CENTERED APPROACH TO NUTRITION COUNSELING
Carl Rogers developed the client-centered (now referred to as the person-centered) approach to counseling in the 1940s. This approach is nondirective and focused on the concerns of the patient. The therapist acts as a facilitator, helping clients to explore their feelings and attitudes related to their problem areas. As clients explore these issues, they gradually come to a better understanding of themselves and thus are able to resolve the problems based on this new insight. For this process to take place, the counselor must provide an environment where clients are given the safety and freedom to explore their own issues and to experience their own feelings.
The client-centered theory of counseling has evolved, and many health professions have adopted it as consumers have become more vocal and more interested in taking responsibility for their own health. Many researchers in the dietetics profession have recommended the use of a client-centered approach to counseling, and, in Canada, the client-centered approach is considered one of the core concepts of dietetic practice. However, a recent review of the dietetics literature revealed that there has been little discussion about what that means from the perspective of the practicing dietitian. Therefore, a group of Canadian researchers explored dietitians' understanding of the client-centered approach to nutrition counseling.
In-depth qualitative interviews were conducted with 25 Canadian dietitians from a variety of practice areas. All subjects were involved in individualized nutrition counseling and were self-identified as having advanced-level counseling skills. Interview transcripts were analyzed using a form of inductive, thematic analysis.
Results suggest that although participants believe that practicing in a client-centered manner is important, they were struggling to balance their practice values and beliefs with the realities of their work environments. Many voiced a concern that clients do not always recognize what they "need" to know and may need more information than they "want" to have in order to make an informed decision about their nutrition care. Several participants indicated that they always design their service to meet client needs, but not necessarily wants. There also appeared to be some indecision around who determines these needs and what the difference is between needs and wants. Most dietitians considered expert-defined needs to be most important, and the struggle to balance these "real" medical/health needs with "perceived" client needs appeared to be causing some concern.
Few dietitians would argue that providing client-centered care is not important. However, the findings of this study suggest that this is a complex process, influenced by the context of the situation. Being an effective counselor involves more than just good communication skills and having expert nutrition knowledge. It is also important to understand how to develop a therapeutic relationship with clients and actively respect the knowledge and skills that they bring to that relationship.
Debbie MacLellan and Shawna Berenbaum. Canadian Dietitians' Understanding of the Client-Centered Approach to Nutrition Counseling. JADA;107:1414-1417 (August 2007). [Correspondence: Debbie MacLellan, PhD, RD, Department of Family and Nutritional Sciences, University of Prince Edward Island, 550 University Ave, Charlottetown, PE C1A 4P3, Canada. E-mail: email@example.com].
SHORT-TERM EXPOSURE TO A HIGH-PROTEIN DIET DIFFERENTIALLY AFFECTS GLOMERULAR FILTRATION RATE BUT NOT ACID-BASE BALANCE IN OLDER COMPARED TO YOUNGER ADULTS
High protein intake has been shown to increase renal plasma flow and the glomerular filtration rate (GFR). It is accepted that glomerular hyperfiltration can cause progressive kidney damage in people who already suffer from kidney disease. However, there is controversy over the effects of hyperfiltration in those with normal kidney function. In addition, protein-rich diets produce an acid load resulting from the metabolism of sulfur-containing amino acids. To maintain acid-base homeostasis, the kidney increases its excretion of the acid load, mainly via enhanced excretion of ammonium. Because aging is associated with a decline in kidney function in the form of decreased GFR and decreased ability to excrete acid, a recent study hypothesized that short-term exposure to a high-protein diet would lead to a higher degree of metabolic acidosis in older versus younger subjects.
The subjects included 24 healthy men and women between the ages of either 25 and 40 years (n = 12) or 55 and 70 years (n = 10). Each subject participated in a six-week crossover study consisting of two periods. Each period comprised two weeks of washout followed by one week on the experimental diet. During the washout weeks, subjects consumed their usual diets. The two experimental diets consisted of a high-protein (2 g/kg/day) or a low-protein (0.5 g/kg/day) diet, and the meals were provided by the metabolic kitchen at the Cincinnati Children's Hospital Medical Center's General Clinical Research Center. Fasting blood samples were drawn on the first day of each experimental diet week and again right after the end of each experimental diet. Blood samples were drawn to measure blood urea nitrogen, creatinine, sodium, potassium, phosphorus, calcium, and carbon dioxide.
The older group, mainly the women, showed an increase in GFR after the high-protein diet as compared with the low-protein diet. Urinary pH was significantly lower and ammonium excretion was significantly higher after the high-protein diet in both age groups, but neither group developed a clinically detectable acidosis after the week of receiving a high-protein diet.
The results showed that one week of a high-protein diet altered kidney filtration differently in older versus younger individuals, which likely is related to baseline GFR and sex. The strength of this study is the use of metabolic meals to control protein content of the experimental diets, but it should be noted that the study had a small sample size. Although one week of high-protein diet increased acid excretion and resulted in a mild reduction of serum bicarbonate in both older and younger individuals, long-term studies are needed to ascertain the effects of a high-protein experimental diet on systemic acid-base balance, specifically in older adults.
Erin A. Wagner and Grace A. Falciglia. Short-Term Exposure to a High Protein Diet Differentially Affects Glomerular Filtration Rate but Not Acid-Base Balance in Older Compared to Younger Adults. JADA;107:1404-1408 (August 2007). [Correspondence: Grace A. Falciglia, PhD, RD, University of Cincinnati Medical Center, 3202 Eden Ave, Cincinnati, OH 45267-0394. E-mail: firstname.lastname@example.org].
VALIDATION OF A FOOD FREQUENCY QUESTIONNAIRE FOR ASSESSMENT OF CALCIUM AND BONE-RELATED NUTRIENT INTAKE IN RURAL POPULATIONS
The food frequency questionnaire (FFQ) is often the method used for assessing nutrient intake in epidemiologic studies. The underlying principle of the FFQ approach is that the average long-term diet, such as consumption patterns over weeks, months or years, is theoretically a more relevant determinant of chronic disease than intake on a few specific days. In addition, in several studies of their validity, FFQs have been found to be reasonably accurate and are inexpensive to administer and process.
The assessment of dietary calcium intake is of interest when studying bone health in population groups because dietary calcium has long been considered to play a role in the development of age-related osteoporosis. Several validation studies using FFQs designed specifically to assess calcium intake have been completed. To determine the ability of an FFQ to measure intakes of calcium and calcium-related nutrients in rural populations, Osowski et al. compared estimates of intakes of dietary calcium and bone-related nutrients from the Willett semi-quantitative FFQ, which has been previously validated in a population of male health professionals, with the results of four 24-hour dietary recalls obtained during a previous one-year span.
Eighty-one subjects from a longitudinal study of the effects of lifestyle factors on bone mass accretion (the South Dakota Rural Bone Health Study) participated. All subjects completed the Willett 97GP 2003 version self-administered semiquantitative FFQ and had completed four 24-hour dietary recalls within the previous year. Calcium and bone-related nutrient intakes were expressed as milligrams per day, milligrams per 1,000 kcal, or quartiles.
Calcium intakes from the FFQ and recalls were 1,287 and 1,141 mg/day, respectively, but calcium per 1,000 kcal did not differ. The average nutrient intake calculated from the FFQ was statistically greater than that calculated from the average of the 24-hour recalls for all nutrients except total energy and fat. Calcium intake by FFQ correlated with intake by recall whether expressed as mg/day or mg/1,000 kcal. Bland-Altman plots indicated fairly good agreement between methods. Seventy-eight percent of subjects fell into the same or an adjacent quartile when calcium intake was expressed as mg/day, and 83% when expressed as mg/1,000 kcal. Gross misclassification occurred in 0% to 4% of the nutrients.
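The quartile-agreement statistic reported above can be sketched as follows. This is a hypothetical illustration only: the intake values and quartile cutoffs below are invented, not taken from the study.

```python
# Illustrative sketch of quartile cross-classification agreement between two
# dietary assessment methods (FFQ vs. 24-hour recall). All numbers are invented.

def quartile(value, cutoffs):
    """Return a 0-3 quartile index for a value, given three quartile cutoffs."""
    for i, c in enumerate(cutoffs):
        if value < c:
            return i
    return 3

def agreement_within_one(ffq, recall, ffq_cuts, recall_cuts):
    """Fraction of subjects classified into the same or an adjacent quartile
    by both methods (the agreement measure described in the text)."""
    same_or_adjacent = sum(
        abs(quartile(f, ffq_cuts) - quartile(r, recall_cuts)) <= 1
        for f, r in zip(ffq, recall)
    )
    return same_or_adjacent / len(ffq)

# Invented calcium intakes (mg/day) for six hypothetical subjects
ffq = [900, 1100, 1300, 1500, 1700, 800]
recall = [850, 1000, 1400, 1300, 1600, 1200]
ffq_cuts = [1000, 1200, 1500]      # assumed method-specific quartile cutoffs
recall_cuts = [950, 1150, 1450]

print(round(agreement_within_one(ffq, recall, ffq_cuts, recall_cuts), 2))
```

Note that each method uses its own quartile cutoffs, so systematic overestimation by the FFQ (higher mean intakes) need not hurt ranking agreement.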
The significant differences in mean intakes between the two methods, and the fact that these differences varied across intake levels, indicate that the FFQ may not be a valid indicator of an individual's intake. However, the FFQ does adequately classify rural populations into quartiles of calcium and bone-related nutrient intakes, making it a useful tool for assessing dietary calcium and bone-related nutrient intake in rural populations.
Jane M. Osowski, Tianna Beare, and Bonny Specker. Validation of a Food Frequency Questionnaire for Assessment of Calcium and Bone-Related Nutrient Intake in Rural Populations. JADA;107:1349-1355 (August 2007). [Correspondence: Bonny Specker, PhD, EA Martin Program in Human Nutrition, Box 2204, EAM Bldg, South Dakota State University, Brookings, SD 57007. E-mail: email@example.com].
Heterocyclic amines (HCAs) are formed in meat, chicken, and fish through reactions of natural components of muscle tissue. Meat type, cooking method, and cooking duration and temperature are among factors known to influence the extent to which HCAs form in cooked meats. HCAs are effective carcinogens in laboratory mice and rats, and their intake has been associated with increased cancer risk in humans.
Studies of the health effects of HCA intake have relied on various methods of dietary assessment to determine intake of meat (beef and pork), chicken, and fish, the primary sources of HCA in the diet. The most common method has been the FFQ, supplemented with questions about cooking method and doneness preferences; however, this may not accurately capture HCA intake. In addition, seasonal differences in the consumption of certain foods and in the use of HCA-forming cooking methods (barbecuing/grilling) can lead to differential recall of meat intake when a general, commonly used FFQ is administered at different times of the year.
To supplement dietary survey instruments that do not account for HCA-related factors, a meat frequency questionnaire (MFQ) was developed. The purpose of a recent study in JADA was to examine how consistently the MFQ reported meat intake as a fraction of that reported by the FFQ. Keating et al. also sought to evaluate the intake of specific meat items included in the MFQ that are not specifically queried in the FFQ.
Three hundred fourteen African-American males participating in a clinic-based study of prostate disease and HCA intake were included in this study. The men were administered the two questionnaires in a cancer education center prior to undergoing screening evaluations for prostate disease. Fried, broiled, and grilled meat intake was assessed with the MFQ, and total meat intake with the FFQ. Specific meat items included in the MFQ were evaluated as factors potentially explaining discrepancies in meat intake estimated by the two questionnaires. Seasonal variation in meat intake was also examined.
Caloric intake estimated by the FFQ exceeded estimates from both national dietary recalls, whereas daily meat consumption obtained with the FFQ was less than that obtained in the national survey. Meat intakes determined by the two questionnaires were well correlated; however, total meat assessed by the MFQ exceeded that assessed by the FFQ in 30% of the men. Total caloric intake and intake of HCA-associated meat were greatest when the MFQ was administered during winter months.
The MFQ provided a fractional measure of total meat intake and identified specific HCA-associated meat items underreported in the FFQ. HCA concentrations in cooked meats can vary up to 10-fold, depending on meat type and cooking method, so omission of certain food items from the FFQ can contribute to underestimates of HCA intake well beyond their gram-weight contribution to the overall diet. Consultation with a nutrition expert on the African-American diet led the authors to include meat patties in the MFQ. Therefore, to estimate HCA intake in other populations with unique dietary habits and foods, different questions about population-specific meat items may be needed in the MFQ.
Garrett A. Keating, Kenneth T. Bogen, June M. Chan. Development of a Meat Frequency Questionnaire for Use in Diet and Cancer Studies. JADA;107:1356-1362 (August 2007). [Correspondence: Garrett A. Keating, PhD, Lawrence Livermore National Laboratory, Box 808, L-396, Livermore, CA 94551. E-mail: firstname.lastname@example.org].
The relationship of dietary composition (the contribution of fat, carbohydrate, and protein to energy intake) to weight and the prevalence of obesity is of great interest to researchers and the public health community. A recent review suggests that in short-term controlled settings, high-protein, moderate- or low-carbohydrate diets result in more weight loss than traditional low-fat, high-carbohydrate diets. However, observational studies suggest a somewhat different role of dietary composition in long-term weight status: a low-fat diet is more common among those who lose weight and maintain the loss.
In the United States, there are clear differences in the rates of overweight and obesity between non-Hispanic white (58% and 31%, respectively) and Hispanic women (72% and 40%, respectively). Since there is little information in the literature on the potential contribution of dietary macronutrient composition or dietary pattern to the rates of obesity in Hispanic and non-Hispanic white women in the United States, a recent study described the common dietary practices of these women and examined whether diet composition was associated with overweight and obesity in Hispanic and non-Hispanic white women.
This study included 871 Hispanic and 1,599 non-Hispanic white women from the southwestern United States. Dietary data came from the 4-Corners Breast Cancer Study, conducted from 2000 to 2006, in which the women completed a diet history questionnaire, a detailed physical activity history, and a medical and reproductive history. Anthropometric measurements, including height and weight, were taken by a certified staff member. BMI was calculated and the women were categorized as normal weight (<25), overweight (25 to 29.9), or obese (≥30).
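The BMI categorization above can be sketched in a few lines. This is a minimal illustration using the study's cutoffs (normal <25, overweight 25 to 29.9, obese ≥30); the example height and weight are invented.

```python
# Minimal sketch of BMI calculation and the weight-status categories used in
# the study. The example subject (70 kg, 1.65 m) is invented for illustration.

def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    """Classify a BMI value using the study's cutoffs."""
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obese"

print(bmi_category(bmi(70, 1.65)))  # BMI of about 25.7 -> "overweight"
```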
More non-Hispanic white women were normal weight, and fewer were obese, compared with Hispanic women. Hispanic women reported consuming more energy, a greater proportion of calories from fat and vegetable protein, less alcohol, and fewer calories from animal protein than non-Hispanic white women. The Western dietary pattern (high-fat foods, refined grains, and fast foods) was associated with a higher prevalence of overweight and obesity; the Prudent dietary pattern (low-fat foods, whole-grain products, fruits, vegetables, and nuts) was associated with a 29% lower prevalence of overweight and a halving of the prevalence of obesity, similarly in Hispanic and non-Hispanic white women. Higher proportions of calories from protein and animal protein were associated with a greater risk of overweight, while greater proportions of energy from fat, protein, or animal protein were associated with a higher risk of obesity among non-Hispanic white women only.
The findings of this study showed that a Western dietary pattern was associated with greater risk and a Prudent diet with reduced risk of overweight and obesity. To reduce risk of overweight and obesity, Hispanic women should maintain healthful aspects of a native Hispanic diet, and non-Hispanic white women should replace animal protein with vegetable protein.
Maureen A. Murtaugh, Jennifer S. Herrick, Carol Sweeney, et al. Diet Composition and Risk of Overweight and Obesity in Women Living in the Southwestern United States. JADA;107:1311-1321 (August 2007). [Correspondence: Maureen A. Murtaugh, PhD, RD, AC 230 SOM, 30 N 1900 E, Salt Lake City, UT 84132. E-mail: Maureen.email@example.com].
The direct cost of diabetes in the United States was $91.8 billion in 2002. The cost of overweight and obesity was equally high. As prevalence of both diabetes and obesity increases, so does the human and financial burden of these conditions.
Lifestyle treatment (diet and physical activity) is the foundation of treatment for both type 2 diabetes and obesity. Despite this, health systems have generally not integrated lifestyle treatment into clinical practice or reimbursed for nutrition services. The resource burden of some lifestyle treatments demonstrated in efficacy trials may be too great for patients, clinicians, and health care systems to sustain. Wolf et al. have previously reported that a modestly priced, registered dietitian (RD)-led case management approach to lifestyle modification was more effective than usual medical care for improving clinical and health-related quality of life outcomes and decreasing self-reported prescription medication use among patients with obesity and type 2 diabetes. The present study evaluated the within-trial program costs and economic outcomes associated with a one-year lifestyle intervention led by an RD lifestyle case manager.
The Improving Control with Activity and Nutrition (ICAN) study was a randomized controlled trial (RCT) conducted from 2001 to 2003. One hundred forty-seven health plan members with obesity and type 2 diabetes were included. Lifestyle case management entailed individual and group education, support, and referrals by registered dietitians; these subjects also received the Lifestyle, Exercise, Attitudes, Relationships, Nutrition (LEARN) manual. Usual-care subjects, the control group, received written educational material, including the LEARN manual. Program costs were calculated by applying standard unit costs to the resources used. Direct medical costs represent expenditures for medical services and products usually paid for by health systems, including costs of hospitalization, urgent care, outpatient care, laboratory tests, and procedures.
Net cost of the intervention was $328 per person per year. After incorporating program costs, mean health plan costs were $3,586 lower in case management than in usual care. The difference was driven by group differences in medical but not pharmaceutical costs, with fewer inpatient admissions and lower costs among case management subjects compared with usual care.
The study found that the addition of a clinically feasible, modest-cost lifestyle intervention, with an RD as lifestyle case manager for a high-risk obese population, at best saved $8,046 per person per year and at worst did not increase health care costs compared with usual medical care. Larger trials are needed to determine whether these results can be replicated in a broader population, since the small sample was restricted to insured participants who were mainly white and employed. Food and nutrition professionals can use these results to support the cost neutrality and effectiveness of their services as an integral component of medical care.
Anne M. Wolf, Mir Siadaty, Beverly Yaeger, et al. Effects of Lifestyle Intervention on Health Care Costs: Improving Control with Activity and Nutrition (ICAN). JADA;107:1365-1373 (August 2007). [Correspondence: Anne M. Wolf, MS, RD, Instructor of Research, Department of Public Health Sciences, 1710 Allied St, Suite 34, Charlottesville, VA 22903. E-mail: firstname.lastname@example.org].
To find out more about Technical Insights and our Alerts, Newsletters, and Research Services, access http://ti.frost.com/
To comment on these articles, write to us at email@example.com
You can call us at: North America: +1-210-247-3877, London: +44-0-1865-398680, Chennai: +91-44-42005820, Singapore: +65-68900949.
Use of this information is determined by license
agreement; any unauthorized use is prohibited. Copyright 2007, Frost &