Free full-text journal articles: medical informatics and bioinformatics




Recent Articles in Journal of Medical Internet Research

Valenzuela JI, Arguello A, Cendales JG, Rizo CA
Web-based asynchronous teleconsulting for consumers in Colombia: a case study.
J Med Internet Res. 2007;9(4):e33.
BACKGROUND: Fourteen years after the reform to Colombia's health system, the promises of universality, improved equity, efficiency, and better quality of care have not materialized. Remote areas remain underserved, and access to care is very limited. Recognizing teleconsultation as an effective way to improve access to health care and health information, a noncommercial open-access Web-based application for teleconsultation called Doctor Chat was developed. OBJECTIVE: The objective was to report the experience of the Center for Virtual Education and Simulation eHealth (Centro de Educación Virtual y Simulación e-Salud) with open-access Web-based asynchronous teleconsultation for consumers in Colombia. METHODS: A teleconsultation service in Spanish was developed and implemented in 2006. Teleconsultation requests were classified on three axes: (1) the purpose of the query, (2) the specialty, and (3) the geographic area of the query. Content analysis was performed on the free-text queries submitted to Doctor Chat, and descriptive statistics were gathered for each of the data categories (name, email, city, country, age, and gender). RESULTS: From September 2006 to March 2007, there were 270 asynchronous teleconsultations documented from 102 (37.8%) men and 168 (62.2%) women. On average, 1.4 requests were received per day. By age group, the largest number of requests came from users aged 24-29 years (n = 80; 30%), followed by users aged 18-23 years (n = 66; 24%). Requests were mainly from Colombia (n = 204; 75.6%) but also from Spain (n = 17; 6.3%), Mexico (n = 11; 4.1%), and other countries. In Colombia, 137 requests (67.2%) originated in Bogotá, the nation's capital, 25 (12.4%) from other main cities of the country, 40 (19.7%) from intermediate cities, and 2 (0.7%) from remote areas. The purpose of the majority of requests was for information about symptoms, health-related problems, or diseases (n = 149; 55.2%) and medications/treatments (n = 70; 25.9%).
By specialty, information was most requested for gynecology and obstetrics (n = 71; 26%), dermatology (n = 28; 10%), urology (n = 22; 8%), and gastroenterology (n = 18; 7%), with anesthesiology, critical care, physical medicine and rehabilitation, and pathology being the least requested (n = 0; 0%). Overall, sexual and reproductive health issues (n = 93; 34%) constituted the main query subject. The average time to deliver a response was 120 hours in 2006 and 59 hours in 2007. Only 19 of the 270 users (7%) completed a survey with comments and perceptions about the system; of these, 18 (95%) expressed positive perceptions and 1 (5%) expressed dissatisfaction with the service. CONCLUSION: The implementation of a Web-based teleconsulting service in Colombia appeared to be an innovative way to improve access to health care and information in the community and encouraged open and explicit discussion. Extending the service to underserved areas could improve access to health services and health information and could potentially improve indicators such as waiting times for consultations and the teenage pregnancy rate; however, cultural, infrastructure, and Internet connectivity barriers must be overcome before implementation can yield population-wide benefits. [Abstract/Link to Full Text]

Eysenbach G
Poverty, human development, and the role of eHealth.
J Med Internet Res. 2007;9(4):e34. [Abstract/Link to Full Text]

Masum H, Singer PA
A visual dashboard for moving health technologies from "lab to village".
J Med Internet Res. 2007;9(4):e32.
New technologies are an important way of addressing global health challenges and human development. However, the road for new technologies from "lab to village" is neither simple nor straightforward. Until recently, there has been no conceptual framework for analyzing and addressing the myriad forces and issues involved in moving health technologies from the lab to those who need them. Recently, based on empirical research, we published such a model. In this paper, we focus on extending the model into a dashboard and examine how this dashboard can be used to manage the information related to the path from lab to village. The next step will be for groups interested in global health, and even the public via the Internet, to use the tool to help guide technologies down this tricky path to improve global health and foster human development. [Abstract/Link to Full Text]

Kontos EZ, Bennett GG, Viswanath K
Barriers and facilitators to home computer and internet use among urban novice computer users of low socioeconomic position.
J Med Internet Res. 2007;9(4):e31.
BACKGROUND: Despite the increasing penetration of the Internet and amount of online health information, there are significant barriers that limit its widespread adoption as a source of health information. One is the "digital divide," with people of higher socioeconomic position (SEP) demonstrating greater access and usage compared to those from lower SEP groups. However, as the access gap narrows over time and more people use the Internet, a shift in research needs to occur to explore how one might improve Internet use as well as website design for a range of audiences. This is particularly important in the case of novice users who may not have the technical skills, experience, or social connections that could help them search for health information using the Internet. The focus of our research is to investigate the challenges in the implementation of a project to improve health information seeking among low SEP groups. The goal of the project is not to promote health information seeking as much as to understand the barriers and facilitators to computer and Internet use, beyond access, among members of lower SEP groups in an urban setting. OBJECTIVE: The purpose was to qualitatively describe participants' self-identified barriers and facilitators to computer and Internet use during a 1-year pilot study as well as the challenges encountered by the research team in the delivery of the intervention. METHODS: Between August and November 2005, 12 low-SEP urban individuals with no or limited computer and Internet experience were recruited through snowball sampling. Each participant received a free computer system, broadband Internet access, monthly computer training courses, and technical support for 1 year as the intervention condition. Upon completion of the study, participants were offered the opportunity to complete an in-depth semistructured interview. Interviews were approximately 1 hour in length and were conducted by the project director.
The interviews were held in the participants' homes and were tape recorded for accuracy. Nine of the 12 study participants completed the semistructured interviews. Members of the research team conducted a qualitative analysis based on the transcripts from the nine interviews using the crystallization/immersion method. RESULTS: Nine of the 12 participants completed the in-depth interview (75% overall response rate), with three men and six women agreeing to be interviewed. Major barriers to Internet use that were mentioned included time constraints and family conflict over computer usage. The monthly training classes and technical assistance components of the intervention surfaced as the most important facilitators to computer and Internet use. The concept of received social support from other study members, such as assistance with computer-related questions, also emerged as an important facilitator to overall computer usage. CONCLUSIONS: This pilot study offers important insights into the self-identified barriers and facilitators in computer and Internet use among urban low-SEP novice users as well as the challenges faced by the research team in implementing the intervention. [Abstract/Link to Full Text]

Patterson V, Swinfen P, Swinfen R, Azzo E, Taha H, Wootton R
Supporting hospital doctors in the Middle East by email telemedicine: something the industrialized world can do to help.
J Med Internet Res. 2007;9(4):e30.
BACKGROUND: Since 1999, the Swinfen Charitable Trust has operated an email referral system between doctors in the developing world and specialists in the industrialized world. Since 2001, it has expanded its operation into the Middle East, in particular Iraq, an area of considerable conflict. OBJECTIVES: The aim was to compare referral patterns to the Trust from the Middle East with those received from the rest of the developing world and to look for qualitative evidence of health gain. METHODS: We analyzed referrals to the Swinfen Charitable Trust between July 2004 and June 2007 and compared these by specialty with those received from elsewhere during the same 3-year period. We asked two referring doctors for their views of the process, and we analyzed the total Middle Eastern referrals made to a single specialty (neurology). RESULTS: Between July 2004 and June 2007, 283 referrals were received from four countries in the Middle East (Iraq, Afghanistan, Pakistan, Kuwait) and 500 cases were received from 22 other countries. The 283 cases resulted in 522 separate queries to specialists. The median time to specialist reply for the queries relating to the 283 Middle Eastern cases was 24.3 hours (interquartile range 6.1-63.3). There was a significant difference in case mix between the Middle East and the rest of the world (P < .001), with more obstetric referrals and fewer referrals in medical specialties and radiology. The referring doctors were helped greatly by the service. The neurologist was confident of the diagnosis in 20 of 26 referrals received (77%). Both referring doctors and the specialist were able to cite referred cases where management was improved as a result of the service. CONCLUSIONS: Email telemedicine can be used in areas of conflict such as the Middle East. Perhaps surprisingly, trauma referrals are not increased but obstetric referrals are.
Supporting individual doctor-patient encounters in this way is therefore often beneficial and is easily expandable. As well as improving care for individuals, email telemedicine provides effective case-based learning for local doctors, leading to improved care for subsequent similar patients. [Abstract/Link to Full Text]

Fraser HS, Allen C, Bailey C, Douglas G, Shin S, Blaya J
Information systems for patient follow-up and chronic management of HIV and tuberculosis: a life-saving technology in resource-poor areas.
J Med Internet Res. 2007;9(4):e29.
BACKGROUND: The scale-up of treatment for HIV and multidrug-resistant tuberculosis (MDR-TB) in developing countries requires a long-term relationship with the patient, accurate and accessible records of each patient's history, and methods to track his/her progress. Recent studies have shown up to 24% loss to follow-up of HIV patients in Africa during treatment and many patients not being started on treatment at all. Some programs for prevention of maternal-child transmission have more than 80% loss to follow-up of babies born to HIV-positive mothers. These patients are at great risk of dying or developing drug resistance if their antiretroviral therapy is interrupted. Similar problems have been found in the scale-up of MDR-TB treatment. OBJECTIVES: The aim of the study was to assess the role of medical information systems in tracking patients with HIV or MDR-TB, ensuring they are promptly started on high quality care, and reducing loss to follow-up. METHODS: A literature search was conducted starting from a previous review and using Medline and Google Scholar. Due to the nature of this work and the relative lack of published articles to date, the authors also relied on personal knowledge and experience of systems in use and their own assessments of systems. RESULTS: Functionality for tracking patients and detecting those lost to follow-up is described in six HIV and MDR-TB treatment projects in Africa and Latin America. Preliminary data show benefits in tracking patients who have not been prescribed appropriate drugs, those who fail to return for follow-up, and those who do not have medications picked up for them by health care workers. There were also benefits seen in providing access to key laboratory data and in using this data to improve the timeliness and quality of care. Follow-up was typically achieved by a combination of reports from information systems along with teams of community health care workers. 
New technologies such as low-cost satellite Internet access, personal digital assistants, and cell phones are helping to expand the reach of these systems. CONCLUSIONS: Effective information systems in developing countries are a recent innovation but will need to play an increasing role in supporting and monitoring HIV and MDR-TB projects as they scale up from thousands to hundreds of thousands of patients. A particular focus should be placed on tracking patients from initial diagnosis to initiation of effective treatment and then monitoring them for treatment breaks or loss to follow-up. More quantitative evaluations need to be performed on the impact of electronic information systems on tracking patients. [Abstract/Link to Full Text]

Saul JE, Schillo BA, Evered S, Luxenberg MG, Kavanaugh A, Cobb N, An LC
Impact of a statewide Internet-based tobacco cessation intervention.
J Med Internet Res. 2007;9(3):e28.
BACKGROUND: An increasing number of people have access to the Internet, and more people are seeking tobacco cessation resources online every year. Despite the proliferation of various online interventions and their evident acceptance and reach, little research has addressed their impact in the real world. Typically, low response rates to Internet-based follow-up surveys generate unrepresentative samples and large confidence intervals when reporting results. OBJECTIVES: The aim of this study was to achieve a high response rate on follow-up evaluation in order to better determine the impact of an Internet-based tobacco cessation intervention provided to tobacco users in Minnesota, United States. METHODS: Participants included 607 men and women aged 18 and over residing in Minnesota who self-reported current tobacco use when registering for an Internet-based tobacco cessation program between February 2 and April 13, 2004. Participants were given access to an interactive website with features including social support, expert systems, proactive email, chat sessions, and online counselors. Mixed-mode follow-up (online survey with telephone survey for online nonrespondents) occurred 6 months after registration. RESULTS: Of the study participants, 77.6% (471/607) responded to the 6-month follow-up survey (39.4% online and 38.2% by telephone). Among respondents, 17.0% (80/471, 95% CI = 13.6%-20.4%) reported that they had not smoked in the past 7 days (observed rate). Assuming all nonrespondents were still smoking (missing=smoking rate), the quit rate was 13.2% (80/607, 95% CI = 10.5%-15.9%). CONCLUSIONS: This mixed-mode follow-up survey of an online smoking cessation program achieved a high response rate and provides a more accurate estimate of long-term cessation rates than has been previously reported. 
Quit rates for the Internet-based tobacco cessation program were higher than those expected for unassisted quit attempts and comparable to other evidence-based behavioral interventions. This similarity in quit rates suggests that an Internet-based cessation program may have as great an impact as, and wider reach than, other cessation programs such as those delivered by telephone. With over 100,000 people having visited the website and over 23,000 having registered, a 6-month self-reported quit rate of 13.2% suggests that the program helped over 3000 Minnesotans remain tobacco free for at least 6 months. Results of this study suggest that an Internet-based cessation program is a useful tool in states' efforts to provide comprehensive cessation services for smokers. [Abstract/Link to Full Text]
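The two quit rates in this abstract follow from the same numerator under different denominators; a short sketch reproduces them. The counts (80 quitters, 471 respondents, 607 registrants) are taken from the abstract; the use of a Wald (normal-approximation) interval is an assumption about how the authors computed their confidence intervals, though it reproduces the reported bounds.

```python
import math

def wald_ci(successes, n, z=1.96):
    """Proportion with a 95% Wald (normal-approximation) confidence interval."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# Observed rate: quitters among the 471 survey respondents.
p_obs, lo_obs, hi_obs = wald_ci(80, 471)
# "Missing = smoking": all 136 nonrespondents counted as still smoking,
# so the denominator grows to all 607 registrants.
p_pen, lo_pen, hi_pen = wald_ci(80, 607)

print(f"observed:        {p_obs:.1%} (95% CI {lo_obs:.1%}-{hi_obs:.1%})")
print(f"missing=smoking: {p_pen:.1%} (95% CI {lo_pen:.1%}-{hi_pen:.1%})")
```

Run as written, this yields 17.0% (13.6%-20.4%) and 13.2% (10.5%-15.9%), matching the abstract's figures.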

Woodall WG, Buller DB, Saba L, Zimmerman D, Waters E, Hines JM, Cutter GR, Starling R
Effect of emailed messages on return use of a nutrition education website and subsequent changes in dietary behavior.
J Med Internet Res. 2007;9(3):e27.
BACKGROUND: At-risk populations can be reached with Web-based disease prevention and behavior change programs. However, such eHealth applications on the Internet need to generate return usage to be effective. Limited evidence is available on how continued usage can be encouraged. OBJECTIVE: This analysis tested whether routine email notification about a nutrition education website promoted more use of the website. METHODS: Adults from six rural counties in Colorado and New Mexico, United States (n = 755) participating in a randomized trial and assigned to the intervention group (n = 380) received, over a period of 4 months, email messages alerting them to updates on the website, along with hyperlinks to new content. Update alerts were sent approximately every 5 weeks (each participant received up to 4 messages). Log-ons to the website were the primary outcome for this analysis. RESULTS: A total of 23.5% (86/366) of the participants responded to at least one email, and 51.2% (44/86) of these participants responded to half of the email messages by logging on to the website. Significantly more log-ons occurred on email notification days compared to all other days (OR = 3.71, 95% CI = 2.72-5.06). More log-ons also occurred just after the notification but declined each day thereafter (OR = 0.97, 95% CI = 0.96-0.98 one day further from mass email). Non-Hispanics (OR = 0.46, 95% CI = 0.26-0.84), older participants (OR = 1.04, 95% CI = 1.04-1.06), and those using the Internet most recently (OR = 0.62, 95% CI = 0.51-0.77) were more likely to log on. Responders to the messages had a more positive change in fruit and vegetable intake (mean change = +1.69) than nonresponders (+0.05), as measured with a food frequency assessment (adjusted Spearman partial correlation coefficient = 0.14, P = .049). Compared to nonresponders, responders were more likely to be non-Hispanic (P = .01), older (P < .001), and had used the Internet more recently (P < .001). 
CONCLUSIONS: Messages sent by email appeared to promote a modest short-lived increase in use of a disease prevention website by some adults. Those who responded to the messages by logging on to the website may have been influenced to improve their diet. [Abstract/Link to Full Text]

van den Berg MH, Schoones JW, Vliet Vlieland TP
Internet-based physical activity interventions: a systematic review of the literature.
J Med Internet Res. 2007;9(3):e26.
BACKGROUND: Nowadays people are extensively encouraged to become more physically active. The Internet has been brought forward as an effective tool to change physical activity behavior. However, little is known about the evidence regarding such Internet-based interventions. OBJECTIVE: The aim of the study was to systematically assess the methodological quality and the effectiveness of interventions designed to promote physical activity by means of the Internet as evaluated by randomized controlled trials. METHODS: A literature search was conducted up to July 2006 using the databases PubMed, Web of Science, EMBASE, PsycINFO, and Cochrane Library. Only randomized controlled trials describing the effectiveness of an Internet-based intervention, with the promotion of physical activity among adults being one of its major goals, were included. Data extracted included source and year of publication, country of origin, targeted health behaviors, participants' characteristics, characteristics of the intervention, and effectiveness data. In addition, the methodological quality was assessed. RESULTS: The literature search resulted in 10 eligible studies of which five met at least nine out of 13 general methodological criteria. The majority of the interventions were tailored to the characteristics of the participants and used interactive self-monitoring and feedback tools. Six studies used one or more theoretical models to compose the contents of the interventions. One study used an objective measure to assess the amount of physical activity (activity monitor), and six studies used multiple subjective measures of physical activity. Furthermore, half of the studies employed measures of physical fitness other than physical activity. In three studies, an Internet-based physical activity intervention was compared with a waiting list group. 
Of these three studies, two reported a significantly greater improvement in physical activity levels in the Internet-based intervention than in the control group. Seven studies compared two types of Internet-based physical activity interventions in which the main difference was either the intensity of contact between the participants and supervisors (4 studies) or the type of treatment procedures applied (3 studies). In one of these studies, a significant effect in favor of an intervention with more supervisor contact was seen. CONCLUSIONS: There is indicative evidence that Internet-based physical activity interventions are more effective than a waiting list strategy. The added value of specific components of Internet-based physical activity interventions such as increased supervisor contact, tailored information, or theoretical fidelity remains to be established. Methodological quality as well as the type of physical activity outcome measure varied, stressing the need for standardization of these measures. [Abstract/Link to Full Text]

Kongsved SM, Basnov M, Holm-Christensen K, Hjollund NH
Response rate and completeness of questionnaires: a randomized study of Internet versus paper-and-pencil versions.
J Med Internet Res. 2007;9(3):e25.
BACKGROUND: Research in quality of life traditionally relies on paper-and-pencil questionnaires. Easy access to the Internet has inspired a number of studies that use the Internet to collect questionnaire data. However, Internet-based data collection may differ from traditional methods with respect to response rate and data quality as well as the validity and reliability of the involved scales. OBJECTIVE: We used a randomized design to compare a paper-and-pencil questionnaire with an Internet version of the same questionnaire with respect to differences in response rate and completeness of data. METHODS: Women referred for mammography at a Danish public hospital from September 2004 to April 2005, younger than 67 years and without a history of breast cancer, were eligible for the study. The women received the invitation to participate along with the usual letter from the Department of Radiology. A total of 533 women were invited to participate. They were randomized to receive either a paper questionnaire, with a prepaid return envelope, or instructions on how to complete the Internet-based version online. The questionnaire consisted of 17 pages with a total of 119 items, including the Short Form-36, Multidimensional Fatigue Inventory-20, Hospital Anxiety and Depression Scale, and questions regarding social status, education level, occupation, and access to the Internet. Nonrespondents received a postal reminder giving them the option of filling out the other version of the questionnaire. RESULTS: The response rate before the reminder was 17.9% for the Internet group compared to 73.2% for the paper-and-pencil group (risk difference 55.3%, P < .001). After the reminder, when the participant could choose between versions of the questionnaire, the total response rate for the Internet and paper-and-pencil group was 64.2% and 76.5%, respectively (risk difference 12.2%, P = .002).
For the Internet version, 97.8% filled in a complete questionnaire without missing data, while 63.4% did so for the paper-and-pencil version (risk difference 34.5%, P < .001). CONCLUSIONS: The Internet version of the questionnaire was superior with respect to completeness of data, but the response rate in this population of unselected patients was low. The general population must become more familiar with the Internet before an online survey can be researchers' first choice, although it is worth considering within selected patient populations, as it saves resources and provides more complete answers. An Internet version may be combined with the traditional version of a questionnaire, and in follow-up studies of patients it may be more feasible to offer Internet versions. [Abstract/Link to Full Text]
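The risk differences reported above are simple subtractions of the arm-level proportions. The sketch below recomputes them from the rounded percentages in the abstract; because the per-arm counts are not given, small discrepancies (e.g. 34.4 vs the reported 34.5 points for completeness) reflect rounding of the underlying data.

```python
def risk_difference(p_reference, p_comparison):
    """Absolute difference between two proportions, in percentage points."""
    return p_reference - p_comparison

# Before the postal reminder: paper-and-pencil vs Internet response rate.
before = risk_difference(73.2, 17.9)
# Completeness: Internet vs paper-and-pencil questionnaires without missing data.
completeness = risk_difference(97.8, 63.4)

print(f"response-rate difference before reminder: {before:.1f} points")
print(f"completeness difference: {completeness:.1f} points")
```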

El Emam K, Neri E, Jonker E
An evaluation of personal health information remnants in second-hand personal computer disk drives.
J Med Internet Res. 2007;9(3):e24.
BACKGROUND: The public is concerned about the privacy of their health information, especially as more of it is collected, stored, and exchanged electronically. But we do not know the extent of leakage of personal health information (PHI) from data custodians. One form of data leakage is through computer equipment that is sold, donated, lost, or stolen from health care facilities or individuals who work at these facilities. Previous studies have shown that it is possible to get sensitive personal information (PI) from second-hand disk drives. However, there have been no studies investigating the leakage of PHI in this way. OBJECTIVES: The aim of the study was to determine the extent to which PHI can be obtained from second-hand computer disk drives. METHODS: A list of Canadian vendors selling second-hand computer equipment was constructed, and we systematically went through the shuffled list and attempted to purchase used disk drives from the vendors. Sixty functional disk drives were purchased and analyzed for data remnants containing PHI using computer forensic tools. RESULTS: It was possible to recover PI from 65% (95% CI: 52%-76%) of the drives. In total, 10% (95% CI: 5%-20%) had PHI on people other than the owner(s) of the drive, and 8% (95% CI: 7%-24%) had PHI on the owner(s) of the drive. Some of the PHI included very sensitive mental health information on a large number of people. CONCLUSIONS: There is a strong need for health care data custodians to either encrypt all computers that can hold PHI on their clients or patients, including those used by employees and subcontractors in their homes, or to ensure that their computers are destroyed rather than finding a second life in the used computer market. [Abstract/Link to Full Text]
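The drive-study proportions and intervals can be reconstructed from the 60 analyzed drives. The abstract does not say which interval method was used; a Wilson score interval is an assumption here, but it reproduces the reported 65% (52%-76%) and 10% (5%-20%) figures.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    center = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (center - half) / denom, (center + half) / denom

lo, hi = wilson_ci(39, 60)  # 39/60 drives = 65% with recoverable personal information
print(f"PI recovered: 65% (95% CI {lo:.0%}-{hi:.0%})")

lo, hi = wilson_ci(6, 60)   # 6/60 drives = 10% with PHI on people other than the owner
print(f"third-party PHI: 10% (95% CI {lo:.0%}-{hi:.0%})")
```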

Armstrong N, Hearnshaw H, Powell J, Dale J
Stakeholder perspectives on the development of a virtual clinic for diabetes care: qualitative study.
J Med Internet Res. 2007;9(3):e23.
BACKGROUND: The development of the Internet has created new opportunities for health care provision, including its use as a tool to aid the self-management of chronic conditions. We studied stakeholder reactions to an Internet-based "virtual clinic," which would allow people with diabetes to communicate with their health care providers, find information about their condition, and share information and support with other users. OBJECTIVE: The aim of the study was to present the results of a detailed consultation with a variety of stakeholder groups in order to identify what they regard as the desirable, important, and feasible characteristics of an Internet-based intervention to aid diabetes self-management. METHODS: Three focus groups were conducted with 12 people with type 1 diabetes who used insulin pumps. Participants were recruited through a local diabetes clinic. One-on-one interviews were conducted with 5 health care professionals from the same clinic (2 doctors, 2 nurses, 1 dietitian) and with 1 representative of an insulin pump company. We gathered patient consensus via email on the important and useful features of Internet-based systems used for other chronic conditions (asthma, epilepsy, myalgic encephalopathy, mental health problems). A workshop to gather expert consensus on the use of information technology to improve the care of young people with diabetes was organized. RESULTS: Stakeholder groups identified the following important characteristics of an Internet-based virtual clinic: being grounded on personal needs rather than only providing general information; having the facility to communicate with, and learn from, peers; providing information on the latest developments and news in diabetes; being quick and easy to use. This paper discusses these characteristics in light of a review of the relevant literature. The development of a virtual clinic for diabetes that embodies these principles, and that is based on self-efficacy theory, is described. 
CONCLUSIONS: Involvement of stakeholders is vital early in the development of a complex intervention. Stakeholders have clear and relevant views on what a virtual clinic system should provide, and these views can be captured and synthesized with relative ease. This work has led to the design of a system that is able to meet user needs and is currently being evaluated in a pilot study. [Abstract/Link to Full Text]

Ruland CM, Bakken S, Røislien J
Reliability and validity issues related to interactive tailored patient assessments: a case study.
J Med Internet Res. 2007;9(3):e22.
Recently there has been a proliferation of interactive tailored patient assessment (ITPA) tools. However, evidence of the reliability and validity of these instruments is often missing, which makes their value in research studies questionable. Because several of the common methods to evaluate instrument reliability and validity are not applicable to interactive tailored patient assessments, informatics researchers may benefit from some guidance on which methods of reliability and validity assessment they can appropriately use. This paper describes the main differences between interactive tailored patient assessments and assessment instruments based on psychometric, or classical test, theory; it summarizes the measurement techniques normally used to ascertain the validity and reliability of assessment instruments based on psychometric theory; it discusses which methods are appropriate for interactive tailored patient assessments and which are not; and finally, it illustrates the application of some of the feasible techniques with a case study that describes how the reliability and validity of the tailored symptom assessment instrument called Choice were evaluated. [Abstract/Link to Full Text]

Evans R, Elwyn G, Edwards A, Watson E, Austoker J, Grol R
Toward a model for field-testing patient decision-support technologies: a qualitative field-testing study.
J Med Internet Res. 2007;9(3):e21.
BACKGROUND: Field-testing is a quality assurance criterion in the development of patient decision-support technologies (PDSTs), as identified in the consensus statement of the International Patient Decision Aids Standards Collaboration. We incorporated field-testing into the development of a Web-based, prostate-specific antigen PDST called Prosdex, which was commissioned as part of the UK Prostate Cancer Risk Management Programme. OBJECTIVES: The aim of this study was to develop a model for the future field-testing of PDSTs, based on the field-testing of Prosdex. Our objectives were (1) to explore the reactions of men to evolving prototypes of Prosdex, (2) to assess the effect of these responses on the development process, and (3) to develop a model for field-testing PDSTs based on the responses and their effect on the development process. METHODS: Semistructured interviews were conducted with the men after they had viewed evolving prototypes of Prosdex in their homes. The men were grouped according to the prototype viewed. Men between 40 and 75 years of age were recruited from two family practices in different parts of Wales, United Kingdom. In the interviews, the men were asked for their views on Prosdex, both as a whole and in relation to specific sections such as the introduction and video clips. Comments and technical issues that arose during the viewings were noted and fed back to the developers in order to produce subsequent prototypes. RESULTS: A total of 27 men were interviewed, in five groups, according to the five prototypes of Prosdex that were developed. The two main themes from the interviews were the responses to the information provided in Prosdex and the responses to specific features of Prosdex. Within these themes, two of the most frequently encountered categories were detail of the information provided and balance between contrasting viewpoints. Criticisms were encountered, particularly with respect to navigation of the site. 
In addition, we found that participants made little use of the decision-making scale. The introduction of an interactive contents page to prototype 2 was the main change made to Prosdex as a result of the field-testing. Based on our findings, a model for the field-testing of PDSTs was developed, involving an exploratory field-testing stage between the planning stage and the development of the first prototype, followed by the prototype field-testing stage, leading to the final PDST. CONCLUSIONS: In the field-testing of Prosdex, a Web-based prostate-specific antigen PDST, the responses of interviewed men were generally favorable. As a consequence of the responses, an interactive contents page was added to the site. We developed a model for the future field-testing of PDSTs, involving two stages: exploratory field-testing and prototype field-testing. [Abstract/Link to Full Text]

Beckjord EB, Finney Rutten LJ, Squiers L, Arora NK, Volckmann L, Moser RP, Hesse BW
Use of the internet to communicate with health care providers in the United States: estimates from the 2003 and 2005 Health Information National Trends Surveys (HINTS).
J Med Internet Res. 2007;9(3):e20.
BACKGROUND: Despite substantial evidence that the public wants access to Internet-based communication with health care providers, online patient-provider communication remains relatively uncommon, and few studies have examined sociodemographic and health-related factors associated with the use of online communication with health care providers at a population level. OBJECTIVE: The aim of the study was to use nationally representative data to report on the prevalence of and changes in use of online patient-provider communication in 2003 and 2005 and to describe sociodemographic and health-related factors associated with its use. METHODS: Data for this study are from two iterations of the Health Information National Trends Survey (HINTS 2003, HINTS 2005). In both years, respondents were asked whether they had ever used email or the Internet to communicate with a doctor or a doctor's office. Adult Internet users in 2003 (n = 3982) and 2005 (n = 3244) were included in the present study. Multivariate logistic regression analysis was conducted to identify predictors for electronic communication with health care providers. RESULTS: In 2003, 7% of Internet users had communicated online with a health care provider; this prevalence significantly increased to 10% in 2005. In multivariate analyses, Internet users with more years of education, who lived in a metro area, who reported poorer health status, or who had a personal history of cancer were more likely to have used online patient-provider communication. CONCLUSIONS: Despite wide diffusion of the Internet, online patient-provider communication remains uncommon but is slowly increasing. Policy-level changes are needed to maximize the availability and effectiveness of online patient-provider communication for health care consumers and health care providers. Internet access remains a significant barrier to online patient-provider communication. [Abstract/Link to Full Text]
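The reported jump from 7% to 10% of Internet users can be sanity-checked with a standard two-sample z-test for proportions. The sketch below is illustrative only, not the authors' analysis; the counts (279 of 3982 in 2003, 324 of 3244 in 2005) are reconstructed from the reported percentages and sample sizes.

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Two-sample z statistic for a difference in proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p2 - p1) / se

# Counts reconstructed from the reported 7% (2003) and 10% (2005) prevalences.
z = two_proportion_z(279, 3982, 324, 3244)
print(round(z, 2))  # well above 1.96, so significant at alpha = .05
```

A z statistic of roughly 4.6 is consistent with the abstract's description of the increase as statistically significant.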

Willinsky J, Quint-Rapoport M
How complementary and alternative medicine practitioners use PubMed.
J Med Internet Res. 2007;9(2):e19.
BACKGROUND: PubMed is the largest bibliographic index in the life sciences. It is freely available online and is used by professionals and the public to learn more about medical research. While primarily intended to serve researchers, PubMed provides an array of tools and services that can help a wider readership in the location, comprehension, evaluation, and utilization of medical research. OBJECTIVE: This study sought to establish the potential contributions made by a range of PubMed tools and services to the use of the database by complementary and alternative medicine practitioners. METHODS: In this study, 10 chiropractors, 7 registered massage therapists, and a homeopath (N = 18), 11 with prior research training and 7 without, were taken through a 2-hour introductory session with PubMed. The 10 PubMed tools and services considered in this study can be divided into three functions: (1) information retrieval (Boolean Search, Limits, Related Articles, Author Links, MeSH), (2) information access (Publisher Link, LinkOut, Bookshelf), and (3) information management (History, Send To, Email Alert). Participants were introduced to between six and 10 of these tools and services. The participants were asked to provide feedback on the value of each tool or service in terms of their information needs, which was ranked as positive, positive with emphasis, negative, or indifferent. RESULTS: The participants in this study expressed an interest in the three types of PubMed tools and services (information retrieval, access, and management), with less well-regarded tools including MeSH Database and Bookshelf. In terms of their comprehension of the research, the tools and services led the participants to reflect on their understanding as well as their critical reading and use of the research. There was universal support among the participants for greater access to complete articles, beyond the approximately 15% that are currently open access. 
The abstracts provided by PubMed were felt to be necessary in selecting literature to read but entirely inadequate for both evaluating and learning from the research. Thus, the restrictions and fees the participants faced in accessing full-text articles were points of frustration. CONCLUSIONS: The study found strong indications of PubMed's potential value in the professional development of these complementary and alternative medicine practitioners in terms of engaging with and understanding research. It provides support for the various initiatives intended to increase access, including a recommendation that the National Library of Medicine tap into the published research that is being archived by authors in institutional archives and through other websites. [Abstract/Link to Full Text]

Singh PM, Wight CA, Sercinoglu O, Wilson DC, Boytsov A, Raizada MN
Language preferences on websites and in Google searches for human health and food information.
J Med Internet Res. 2007;9(2):e18.
BACKGROUND: While it is known that the majority of pages on the World Wide Web are in English, little is known about the preferred language of users searching for health information online. OBJECTIVES: (1) To help global and domestic publishers, for example health and food agencies, to determine the need for translation of online information from English into local languages. (2) To help these agencies determine which language(s) they should select when publishing information online in target nations and for target subpopulations within nations. METHODS: To estimate the percentage of Web publishers that translate their health and food websites, we measured the frequency at which domain names retrieved by Google overlap for language translations of the same health-related search term. To quantify language choice of searchers from different countries, Google provided estimates of the rate at which its search engine was queried in six languages relative to English for the terms "avian flu," "tuberculosis," "schizophrenia," and "maize" (corn) from January 2004 to April 2006. The estimate was based on a 20% sample of all Google queries from 227 nations. RESULTS: We estimate that 80%-90% of health- and food-related institutions do not translate their websites into multiple languages, even when the information concerns pandemic disease such as avian influenza. Although Internet users are often well-educated, there was a strong preference for searching for health and food information in the local language, rather than English. For "avian flu," we found that only 1% of searches in non-English-speaking nations were in English, whereas for "tuberculosis" or "schizophrenia," about 4%-40% of searches in non-English countries employed English. A subset of searches for health information presumably originating from immigrants occurred in their native tongue, not the language of the adopted country. 
However, Spanish-language online searches for "avian flu," "schizophrenia," and "maize/corn" in the United States occurred at only <1% of the English search rate, although the US online Hispanic population constitutes 12% of the total US online population. Sub-Saharan Africa and Bangladesh searches for health information occurred in unexpected languages, perhaps reflecting the presence of aid workers and the global migration of Internet users, respectively. In Latin America, indigenous-language search terms were often used rather than Spanish. CONCLUSIONS: (1) Based on the strong preference for searching the Internet for health information in the local language, indigenous language, or immigrant language of origin, global and domestic health and food agencies should continue their efforts to translate their institutional websites into more languages. (2) We have provided linguistic online search pattern data to help health and food agencies better select languages for targeted website publishing. [Abstract/Link to Full Text]

Cook RF, Billings DW, Hersch RK, Back AS, Hendrickson A
A field test of a web-based workplace health promotion program to improve dietary practices, reduce stress, and increase physical activity: randomized controlled trial.
J Med Internet Res. 2007;9(2):e17.
BACKGROUND: Most work sites engage in some form of health promotion programming designed to improve worker health and reduce health care costs. Although these programs have typically been delivered through combinations of seminars and print materials, workplace health promotion programs are increasingly being delivered through the Internet. OBJECTIVE: The purpose of this research was to evaluate the effectiveness of a Web-based multimedia health promotion program for the workplace, designed to improve dietary practices, reduce stress, and increase physical activity. METHODS: Using a randomized controlled trial design with pretest-posttest comparisons within each group, 419 employees of a human resources company were randomly assigned to the Web-based condition or to a condition that provided print materials on the same topics. All subjects were assessed at pretest and posttest through an online questionnaire containing multiple measures of health behavior and attitudes. The test period was 3 months. Questionnaire data were analyzed mainly by analysis of covariance and t tests. RESULTS: Retention rates were good for both groups: 85% for the Web-based group and 87% for the print group. Subjects using the Web-based program performed significantly better than the print group on Attitudes Toward a Healthful Diet (F(1,415) = 7.104, P = .008) and Dietary Stage of Change (F(1,408) = 6.487, P = .01), but there were no significant group differences on the five other dietary measures. Both groups also showed improvement from pretest to posttest on most dietary measures, as indicated by significant t tests. Within the Web-based group, dosage analyses showed significant effects of the number of times the subject accessed the program on measures of Dietary Self-Efficacy (F(2,203) = 5.270, P = .003), Attitudes Toward a Healthful Diet (F(2,204) = 2.585, P = .045), and Dietary Stage of Change (F(2,200) = 4.627, P = .005). 
No significant differences were found between the two groups on measures of stress or physical activity, although t tests of pretest-posttest changes indicated that both groups improved on several of these measures. The Web-based group gave significantly higher ratings to the program materials than the print group on all health topics and in their overall evaluation (F(1,410) = 9.808, P = .002). CONCLUSIONS: The Web-based program was more effective than print materials in producing improvements in the areas of diet and nutrition but was not more effective in reducing stress or increasing physical activity. The higher ratings given to the Web-based program suggest that workers preferred it to the print materials. Both groups showed numerous pretest-posttest improvements in all health topics, although such improvements might be attributable in part to a Hawthorne effect. Results suggest that a multimedia Web-based program can be a promising means of delivering health promotion material to the workforce, particularly in the area of diet and nutrition. [Abstract/Link to Full Text]
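The pretest-posttest t tests mentioned above have a simple closed form: the mean change score divided by its standard error. The sketch below is a generic paired t statistic computed on made-up attitude scores, purely to illustrate the method; it uses no data from the trial.

```python
from math import sqrt

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for pretest-posttest change."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of diffs
    return mean / (sqrt(var) / sqrt(n)), n - 1           # t statistic, df

# Hypothetical scores for five participants (illustrative, not study data).
pre  = [3.0, 2.5, 4.0, 3.5, 3.0]
post = [4.0, 3.5, 4.5, 4.0, 4.0]
t, df = paired_t(pre, post)
print(round(t, 2), df)  # 6.53 4
```

With real trial data, the resulting t and df would be compared against the t distribution to obtain the P values reported in the abstract.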

Couper MP, Peytchev A, Strecher VJ, Rothert K, Anderson J
Following up nonrespondents to an online weight management intervention: randomized trial comparing mail versus telephone.
J Med Internet Res. 2007;9(2):e16.
BACKGROUND: Attrition, or dropout, is a problem faced by many online health interventions, potentially threatening the inferential value of online randomized controlled trials. OBJECTIVE: In the context of a randomized controlled trial of an online weight management intervention, where 85% of the baseline participants were lost to follow-up at the 12-month measurement, the objective was to examine the effect of nonresponse on key outcomes and explore ways to reduce attrition in follow-up surveys. METHODS: A sample of 700 nonrespondents to the 12-month online follow-up survey was randomly assigned to a mail or telephone nonresponse follow-up survey. We examined response rates in the two groups, costs of follow-up, reasons for nonresponse, and mode effects. We ran several logistic regression models, predicting response or nonresponse to the 12-month online survey as well as predicting response or nonresponse to the follow-up survey. RESULTS: We analyzed 210 follow-up respondents in the mail group and 170 in the telephone group. Response rates of 59% and 55% were obtained for the telephone and mail nonresponse follow-up surveys, respectively. A total of 197 respondents (51.8%) gave reasons related to technical issues or email as a means of communication, with older people more likely to give technical reasons for noncompletion; 144 (37.9%) gave reasons related to the intervention or the survey itself. Mail follow-up was substantially cheaper: We estimate that the telephone survey cost about US $34 per sampled case, compared to US $15 for the mail survey. The telephone responses were subject to possible social desirability effects, with the telephone respondents reporting significantly greater weight loss than the mail respondents. The respondents to the nonresponse follow-up did not differ significantly from the 12-month online respondents on key outcome variables. 
CONCLUSIONS: Mail is an effective way to reduce attrition to online surveys, while telephone follow-up might lead to overestimating the weight loss for both the treatment and control groups. Nonresponse bias does not appear to be a significant factor in the conclusions drawn from the randomized controlled trial. [Abstract/Link to Full Text]

Pagliari C
Design and evaluation in eHealth: challenges and implications for an interdisciplinary field.
J Med Internet Res. 2007;9(2):e15.
Much has been written about insufficient user involvement in the design of eHealth applications, the lack of evidence demonstrating impact, and the difficulties these bring for adoption. Part of the problem lies in the differing languages, cultures, motives, and operational constraints of producers and evaluators of eHealth systems and services. This paper reflects on the benefits of and barriers to interdisciplinary collaboration in eHealth, focusing particularly on the relationship between software developers and health services researchers. It argues that the common pattern of silo or parallel working may be ameliorated by developing mutual awareness and respect for each other's methods, epistemologies, and contextual drivers and by recognizing and harnessing potential synergies. Similarities and differences between models and techniques used in both communities are highlighted in order to illustrate the potential for integrated approaches and the strengths of unique paradigms. By sharing information about our research approaches and seeking to actively collaborate in the process of design and evaluation, the aim of achieving technologies that are truly user-informed, fit for context, high quality, and of demonstrated value is more likely to be realized. This may involve embracing new ways of working jointly that are unfamiliar to the stakeholders involved and that challenge disciplinary conventions. It also has policy implications for agencies commissioning research and development in this area. [Abstract/Link to Full Text]

Leonard KJ, Sittig DF
Improving information technology adoption and implementation through the identification of appropriate benefits: creating IMPROVE-IT.
J Med Internet Res. 2007;9(2):e9.
This paper describes the objectives of a collaborative initiative that attempts to provide the evidence that increased information technology (IT) capabilities, availability, and use lead directly to improved clinical quality, safety, and effectiveness within the inpatient hospital setting. This collaborative network has defined specific measurement indicators in an attempt to examine the existence, timing, and level of improvements in health outcomes that can be derived from IT investment. These indicators are in three areas: (1) IT costs (which includes both initial and ongoing investment), (2) IT infusion (ie, system availability, adoption, and deployment), and (3) health performance (eg, clinical efficacy, efficiency, quality, and effectiveness). Herein, we outline the theoretical framework, the methodology employed to create the metrics, and the benefits that can be obtained. [Abstract/Link to Full Text]

Koru G, El Emam K, Neisa A, Umarji M
A survey of quality assurance practices in biomedical open source software projects.
J Med Internet Res. 2007;9(2):e8.
BACKGROUND: Open source (OS) software is continuously gaining recognition and use in the biomedical domain, for example, in health informatics and bioinformatics. OBJECTIVES: Given the mission critical nature of applications in this domain and their potential impact on patient safety, it is important to understand to what degree and how effectively biomedical OS developers perform standard quality assurance (QA) activities such as peer reviews and testing. This would allow the users of biomedical OS software to better understand the quality risks, if any, and the developers to identify process improvement opportunities to produce higher quality software. METHODS: A survey of developers working on biomedical OS projects was conducted to examine the QA activities that are performed. We took a descriptive approach to summarize the implementation of QA activities and then examined some of the factors that may be related to the implementation of such practices. RESULTS: Our descriptive results show that 63% (95% CI, 54-72) of projects did not include peer reviews in their development process, while 82% (95% CI, 75-89) did include testing. Approximately 74% (95% CI, 67-81) of developers did not have a background in computing, 80% (95% CI, 74-87) were paid for their contributions to the project, and 52% (95% CI, 43-60) had PhDs. A multivariate logistic regression model to predict the implementation of peer reviews was not significant (likelihood ratio test = 16.86, 9 df, P = .051) and neither was a model to predict the implementation of testing (likelihood ratio test = 3.34, 9 df, P = .95). CONCLUSIONS: Less attention is paid to peer review than testing. However, the former is a complementary, and necessary, QA practice rather than an alternative. Therefore, one can argue that there are quality risks, at least at this point in time, in transitioning biomedical OS software into any critical settings that may have operational, financial, or safety implications. 
Developers of biomedical OS applications should invest more effort in implementing systemic peer review practices throughout the development and maintenance processes. [Abstract/Link to Full Text]
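The 95% confidence intervals quoted in the survey results above (eg, 63%, 95% CI 54-72) are consistent with a normal-approximation interval for a proportion. A minimal sketch, assuming a sample of roughly 110 respondents (the abstract does not state n, so that figure is an assumption chosen to reproduce the reported interval):

```python
from math import sqrt

def prop_ci(p, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    half = z * sqrt(p * (1 - p) / n)  # half-width of the interval
    return p - half, p + half

# p = .63 with an assumed n of ~110 reproduces the reported 95% CI of 54-72.
lo, hi = prop_ci(0.63, 110)
print(round(lo, 2), round(hi, 2))  # 0.54 0.72
```

For proportions near 0 or 1, or for small samples, a Wilson or exact interval would be a better choice than this normal approximation.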

Stepnowsky CJ, Palau JJ, Marler MR, Gifford AL
Pilot randomized trial of the effect of wireless telemonitoring on compliance and treatment efficacy in obstructive sleep apnea.
J Med Internet Res. 2007;9(2):e14.
BACKGROUND: Obstructive sleep apnea (OSA) is a serious medical condition characterized by repeated complete or partial obstructions of the upper airway during sleep and is prevalent in 2% to 4% of working middle-aged adults. Nasal continuous positive airway pressure (CPAP) is the gold-standard treatment for OSA. Because compliance rates with CPAP therapy are disappointingly low, effective interventions are needed to improve CPAP compliance among patients diagnosed with OSA. OBJECTIVE: The aim was to determine whether wireless telemonitoring of CPAP compliance and efficacy data, compared to usual clinical care, results in higher CPAP compliance and improved OSA outcomes. METHODS: Forty-five patients newly diagnosed with OSA were randomized to either telemonitored clinical care or usual clinical care and were followed for their first 2 months of treatment with CPAP therapy. CPAP therapists were not blinded to the participants' treatment group. RESULTS: Twenty participants in each group received the designated intervention. Patients randomized to telemonitored clinical care used CPAP an average of 4.1 +/- 1.8 hours per night, while the usual clinical care patients averaged 2.8 +/- 2.2 hours per night (P = .07). Telemonitored patients used CPAP on 78% +/- 22% of the possible nights, while usual care patients used CPAP on 60% +/- 32% of the nights (P = .07). No statistically significant differences between the groups were found on measures of CPAP efficacy, including measures of mask leak and the Apnea-Hypopnea Index. Patients in the telemonitored group rated their likelihood to continue using CPAP significantly higher than the patients in the usual care group. Patients in both groups were highly satisfied with the care they received and rated themselves as "not concerned" that their CPAP data were being wirelessly monitored. 
CONCLUSIONS: Telemonitoring of CPAP compliance and efficacy data and rapid use of those data by the clinical sleep team to guide the collaborative (ie, patient and provider) management of CPAP treatment is as effective as usual care in improving compliance rates and outcomes in new CPAP users. This study was designed as a pilot; larger, well-powered studies are necessary to fully evaluate the clinical and economic efficacy of telemonitoring for this population. [Abstract/Link to Full Text]

Atkinson NL, Massett HA, Mylks C, Hanna B, Deering MJ, Hesse BW
User-centered research on breast cancer patient needs and preferences of an Internet-based clinical trial matching system.
J Med Internet Res. 2007;9(2):e13.
BACKGROUND: Internet-based clinical trial matching systems have the potential to streamline the search process for women with breast cancer seeking alternative treatments. A prototype system was developed to leverage the capabilities of a personal health record system for the purpose of identifying clinical trials. OBJECTIVE: This study examines how breast cancer patients perceive and interact with a preliminary version of an Internet-based clinical trial matching system, while taking into account the demands of diagnosis and treatment decision making. METHODS: Breast cancer patients participated in small group discussions and interacted with the prototype website in a two-phase qualitative research process. The first phase explored the experience of breast cancer patients (n = 8) with treatment decision making, initial responses to the idea of Internet-based clinical trial matching systems, and reactions to the prototype site. In the second phase, a different set of breast cancer patients (n = 7) reviewed revised website content and presentation and participated in a usability test in which they registered on the system and completed a personal health record to set up the matching process. RESULTS: Participants were initially skeptical of the prototype system because it emphasized registration, had a complicated registration process, and asked for complex medical information. Changing content and attending to usability guidelines improved the experience for women in the second phase of the research and enabled the identification of functionality and content issues, such as lack of clear information and directions on how to use the system. CONCLUSIONS: This study showed that women felt favorably about the idea of using the Internet to search for clinical trials but that such a system needed to meet their expectations for credibility and privacy and be sensitive to their situation. 
Developers can meet these expectations by conforming to established usability guidelines and testing improvements with breast cancer patients. Future research is needed to verify these findings and to continue to improve systems of this nature. [Abstract/Link to Full Text]

Meier A, Lyons EJ, Frydman G, Forlenza M, Rimer BK
How cancer survivors provide support on cancer-related Internet mailing lists.
J Med Internet Res. 2007;9(2):e12.
BACKGROUND: Internet mailing lists are an important and increasingly common way for cancer survivors to find information and support. Most studies of these mailing lists have investigated lists dedicated to one type of cancer, most often breast cancer. Little is known about whether the lessons learned from experiences with breast cancer lists apply to other cancers. OBJECTIVES: The aim of the study was to compare the structural characteristics of 10 Internet cancer-related mailing lists and identify the processes by which cancer survivors provide support. METHODS: We studied a systematic 9% sample of email messages sent over five months to 10 cancer mailing lists hosted by the Association of Cancer Online Resources (ACOR). Content analyses were used to compare the structural characteristics of the lists, including participation rates and members' identities as survivors or caregivers. We used thematic analyses to examine the types of support that list members provided through their message texts. RESULTS: Content analyses showed that characteristics of list members and subscriber participation rates varied across the lists. Thematic analyses revealed very little "off topic" discussion. Feedback from listowners indicated that they actively modeled appropriate communication on their lists and worked to keep discussions civil and focused. In all lists, members offered support much more frequently than they requested it; survivors were somewhat more likely than caregivers to offer rather than to ask for support. The most common topics in survivors' messages were about treatment information and how to communicate with health care providers. Although expressions of emotional support were less common than informational support, they appeared in all lists. Many messages that contained narratives of illness or treatment did not specifically ask for help but provided emotional support by reassuring listmates that they were not alone in their struggles with cancer. 
Survivors' explicit expressions of emotional support tended to be messages that encouraged active coping. Such messages also provided senders with opportunities to assume personally empowering "helper" roles that supported self-esteem. CONCLUSIONS: Many cancer survivors use the Internet to seek informational and emotional support. Across 10 lists for different cancers, informational support was the main communication style. Our finding of an emphasis on informational support is in contrast to most prior literature, which has focused on emotional support. We found the most common expressions of support were offers of technical information and explicit advice about how to communicate with health care providers. Topics and proportions of informational and emotional support differed across the lists. Our previous surveys of ACOR subscribers showed that they join the lists primarily to seek information; this qualitative study shows that they can and do find what they seek. They also find opportunities to play rewarding roles as support givers. [Abstract/Link to Full Text]

Glasgow RE, Nelson CC, Kearney KA, Reid R, Ritzwoller DP, Strecher VJ, Couper MP, Green B, Wildenhaus K
Reach, engagement, and retention in an Internet-based weight loss program in a multi-site randomized controlled trial.
J Med Internet Res. 2007;9(2):e11.
BACKGROUND: Research increasingly supports the conclusion that well-designed programs delivered over the Internet can produce significant weight loss compared to control conditions in randomized trials. Much less is known about four important issues addressed in this study: (1) which recruitment methods produce higher eHealth participation rates, (2) which patient characteristics are related to enrollment, (3) which characteristics are related to level of user engagement in the program, and (4) which characteristics are related to continued participation in project assessments. METHODS: We recruited overweight members of three health maintenance organizations (HMOs) to participate in an entirely Internet-mediated weight loss program developed by HealthMedia, Inc. Two different recruitment methods were used: personal letters from prevention directors in each HMO, and general notices in member newsletters. The personal letters were sent to members diagnosed with diabetes or heart disease and, in one HMO, to a general membership sample in a particular geographic location. Data were collected in the context of a 2x2 randomized controlled trial, with participants assigned to receive or not receive a goal setting intervention and a nutrition education intervention in addition to the basic program. RESULTS: A total of 2311 members enrolled. Bivariate analyses on aggregate data revealed that personalized mailings produced higher enrollment rates than member newsletters and that members with diabetes or heart disease were more likely to enroll than those without these diagnoses. In addition, males, those over age 60, smokers, and those estimated to have higher medical expenses were less likely to enroll (all P < .001). Males and those in the combined intervention were less likely to engage initially, or to continue to be engaged with their Web program, than other participants. 
In terms of retention, multiple logistic regressions revealed that enrollees under age 60 (P < .001) and those with higher baseline self-efficacy were less likely to participate in the 12-month follow-up (P = .03), but with these exceptions, those participating were very similar to those not participating in the follow-up. CONCLUSIONS: A single personalized mailing increases enrollment in Internet-based weight loss programs. eHealth programs offer great potential for recruiting large numbers of participants, but they may not reach those at highest risk. Patient characteristics related to each of these important factors may be different, and more comprehensive analyses of determinants of enrollment, engagement, and retention in eHealth programs are needed. [Abstract/Link to Full Text]

Linke S, Murray E, Butler C, Wallace P
Internet-based interactive health intervention for the promotion of sensible drinking: patterns of use and potential impact on members of the general public.
J Med Internet Res. 2007;9(2):e10.
BACKGROUND: Heavy drinking is responsible for major health and social problems. Brief interventions have been shown to be effective, but there have been difficulties in reaching those who might benefit from them. Pilot studies have indicated that a Web-based intervention is likely to be acceptable to heavy drinkers and may produce some health benefits. However, there are few data on how many people might use such a program, the patterns of use, and potential benefits. OBJECTIVES: The aim was to examine the demographic characteristics of users of a free, Web-based, 6-week intervention for heavy drinkers and to describe the methods by which users identified the site, the pattern of site use and attrition, the characteristics associated with completing the program, and the self-reported impact on alcohol-related outcomes. METHODS: Cohort study. Visitors to the Web site were offered screening with the Fast Alcohol Screening Test, and those scoring above the cutoff for risky drinking were invited to register with the program. Demographic information was collected routinely at registration, and questionnaires were completed at the end of weeks 1 and 6. The outcome measures assessed dependency (Short Alcohol Dependency Data Questionnaire), harms (modified Alcohol Problems Questionnaire), and mental health (Clinical Outcomes in Routine Evaluation-Outcome Measure). RESULTS: The records of 10,000 users were analyzed. The mean age was 37.4 years, 51.1% were female, 37.5% were single, and 42.4% lived with children. The majority were White British, lived in the United Kingdom, and reported occupations from the higher socioeconomic strata. Over 70% connected to the Down Your Drink site from another Internet-based resource, whereas only 5.8% heard about the site from a health or other professional. Much of the Web site use (40%) was outside normal working hours. Attrition from the program was high, with only 16.5% of registrants completing the whole 6 weeks. 
For those who completed both the program and the final outcome measures, dependency, alcohol-related problems, and mental health symptoms were all reduced at week 6. CONCLUSIONS: The Web-based intervention was widely used, and those who stayed with the program showed significant reductions in self-reported indicators of dependency, alcohol-related problems, and mental health symptoms; however, this association cannot be assumed to be causal. Programs of this type may have the potential to reach large numbers of heavy drinkers who might not otherwise seek help. There are significant methodological challenges, and further research is needed to fully evaluate such interventions. [Abstract/Link to Full Text]

Gustafson DH
A good death.
J Med Internet Res. 2007;9(1):e6.
The Institute of Medicine defines a good death as "one that is free from avoidable distress and suffering for patients, families, and caregivers, in general accordance with the patients' and families' wishes." The current system creates barriers to reducing the stress and suffering that accompany a patient's end of life. Data and eHealth technology, if they were more accessible, could help patients, families, and caregivers cope with end-of-life issues. [Abstract/Link to Full Text]

Keselman A, Tse T, Crowell J, Browne A, Ngo L, Zeng Q
Assessing consumer health vocabulary familiarity: an exploratory study.
J Med Internet Res. 2007;9(1):e5.
BACKGROUND: Accurate assessment of the difficulty of consumer health texts is a prerequisite for improving readability. General purpose readability formulas based primarily on word length are not well suited for the health domain, where short technical terms may be unfamiliar to consumers. To address this need, we previously developed a regression model for predicting "average familiarity" with consumer health vocabulary (CHV) terms. OBJECTIVE: The primary goal was to evaluate the ability of the CHV term familiarity model to predict (1) surface-level familiarity of health-related terms and (2) understanding of the underlying meaning (concept familiarity) among actual consumers. Secondary goals involved exploring the effect of demographic factors (eg, health literacy) on surface-level and concept-level familiarity and describing the relationship between the two levels of familiarity. METHODS: Survey instruments for assessing surface-level familiarity (45 items) and concept-level familiarity (15 items) were developed. All participants also completed a demographic survey and a standardized health literacy assessment, S-TOFHLA. RESULTS: Based on surveys completed by 52 consumers, linear regression suggests that predicted CHV term familiarity is a statistically significant predictor (P < .001) of participants' surface-level and concept-level familiarity performance. Health literacy was a statistically significant predictor of surface-level familiarity scores (P < .001); its effect on concept-level familiarity scores warrants further investigation (P = .06). Educational level was not a significant predictor of either type of familiarity. Participant scores indicated that conceptualization lagged behind recognition, especially for terms predicted as "likely to be familiar" (P = .006). CONCLUSIONS: This exploratory study suggests that the CHV term familiarity model is predictive of consumer recognition and understanding of terms in the health domain. 
Potential uses of such a model include readability formulas tailored to the consumer health domain and tools to "translate" professional medical documents into text that is more accessible to consumers. The study also highlights the usefulness of distinguishing between surface-level term familiarity and deeper concept understanding and presents one method for assessing familiarity at each level. [Abstract/Link to Full Text]

Zeng QT, Tse T, Divita G, Keselman A, Crowell J, Browne AC, Goryachev S, Ngo L
Term identification methods for consumer health vocabulary development.
J Med Internet Res. 2007;9(1):e4.
BACKGROUND: The development of consumer health information applications such as health education websites has motivated research on consumer health vocabulary (CHV). Term identification is a critical task in vocabulary development. Because of the heterogeneity and ambiguity of consumer expressions, term identification for CHV is more challenging than for professional health vocabularies. OBJECTIVE: For the development of a CHV, we explored several term identification methods, including collaborative human review and automated term recognition methods. METHODS: A set of criteria was established to ensure consistency in the collaborative review, which analyzed 1893 strings. Using the results from the human review, we tested two automated methods: the C-value formula and a logistic regression model. RESULTS: The study identified 753 consumer terms and found the logistic regression model to be highly effective for CHV term identification (area under the receiver operating characteristic curve = 95.5%). CONCLUSIONS: The collaborative human review and logistic regression methods were effective for identifying terms for CHV development. [Abstract/Link to Full Text]
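The C-value formula mentioned in the abstract above is simple enough to sketch. The following is a minimal illustration of the idea, not the authors' implementation; the terms, frequencies, and the choice of measuring length in words are invented for the example:

```python
import math

def c_value(term, freq, candidates):
    """C-value termhood score: rewards longer, frequent strings, and
    discounts a string by the average frequency of the longer candidate
    terms it appears nested inside."""
    longer = [t for t in candidates if term in t and t != term]
    base = math.log2(len(term.split()))  # length measured in words here
    if not longer:
        return base * freq[term]
    avg_nested = sum(freq[t] for t in longer) / len(longer)
    return base * (freq[term] - avg_nested)

# Hypothetical frequency counts from a consumer health corpus.
freq = {"blood pressure": 10, "high blood pressure": 4, "low blood pressure": 2}
score = c_value("blood pressure", freq, freq)  # 1.0 * (10 - 3) = 7.0
```

Note that this basic form assigns single-word candidates a score of zero (log2(1) = 0); published variants handle unigrams with log2(|a| + 1) or similar adjustments.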

Recent Articles in BMC Bioinformatics

Liu ZP, Wu LY, Wang Y, Chen L, Zhang XS
Predicting gene ontology functions from protein's regional surface structures.
BMC Bioinformatics. 2007 Dec 11;8(1):475.
ABSTRACT: BACKGROUND: Annotation of protein functions is an important task in the post-genomic era. Most early approaches for this task exploit only the sequence or global structure information. However, protein surfaces are believed to be crucial to protein functions because they are the main interfaces to facilitate biological interactions. Recently, several databases related to structural surfaces, such as pockets and cavities, have been constructed with a comprehensive library of identified surface structures. For example, CASTp provides identification and measurements of surface accessible pockets as well as interior inaccessible cavities. RESULTS: A novel method was proposed to predict the Gene Ontology (GO) functions of proteins from the pocket similarity network, which is constructed according to the structure similarities of pockets. The statistics of the networks were presented to explore the relationship between similar pockets and the GO functions of proteins. Cross-validation experiments were conducted to evaluate the performance of the proposed method. Results and code are available at: CONCLUSIONS: The computational results demonstrate that the proposed method based on the pocket similarity network is effective and efficient for predicting GO functions of proteins in terms of both computational complexity and prediction accuracy. The proposed method revealed a strong relationship between small surface patterns (or pockets) and GO functions, which can be further used to identify active sites or functional motifs. The high-quality performance of the prediction method, together with the statistics, also indicates that pockets play essential roles in biological interactions and GO functions. Moreover, the proposed network framework can also be applied to protein spatial surface patterns other than pockets to predict protein functions. [Abstract/Link to Full Text]
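The core idea of transferring GO labels across a similarity network can be sketched as a weighted neighbor vote. This is a simplified stand-in for the authors' method, not their algorithm; the similarity weights, protein names, and annotations below are hypothetical:

```python
def predict_go(protein, network, annotations):
    """Score each GO term by the summed pocket-similarity of annotated
    neighbors in the network; return the top-scoring term (or None)."""
    scores = {}
    for neighbor, similarity in network.get(protein, {}).items():
        for go_term in annotations.get(neighbor, ()):
            scores[go_term] = scores.get(go_term, 0.0) + similarity
    return max(scores, key=scores.get) if scores else None

# Hypothetical pocket-similarity edges and GO annotations.
network = {"p1": {"p2": 0.9, "p3": 0.4}}
annotations = {"p2": ["GO:0003824"], "p3": ["GO:0005515"]}
prediction = predict_go("p1", network, annotations)  # "GO:0003824"
```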

Schatz MC, Trapnell C, Delcher AL, Varshney A
High-throughput sequence alignment using Graphics Processing Units.
BMC Bioinformatics. 2007 Dec 10;8(1):474.
ABSTRACT: BACKGROUND: The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. RESULTS: This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. CONCLUSIONS: MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU. [Abstract/Link to Full Text]
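The suffix-tree machinery of MUMmer is beyond a short sketch, but the underlying task, finding exact matches between each query and the reference, is easy to state, and the naive CPU version below shows why it parallelizes well on GPUs: each query is processed independently. The sequences are made up, and MUMmer/MUMmerGPU solve this in linear time rather than by this brute-force scan:

```python
def longest_exact_match(ref, query):
    """Naive search for the longest exact match of query against ref.
    Returns (length, ref_start, query_start)."""
    best = (0, -1, -1)
    for qs in range(len(query)):
        for rs in range(len(ref)):
            length = 0
            while (qs + length < len(query) and rs + length < len(ref)
                   and query[qs + length] == ref[rs + length]):
                length += 1
            if length > best[0]:
                best = (length, rs, qs)
    return best

match = longest_exact_match("ACGTACGTGA", "TTACGTG")  # (6, 3, 1): "TACGTG"
```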

Lin LH, Lee HC, Li WH, Chen BS
A systematic approach to detecting transcription factors in response to environmental stresses.
BMC Bioinformatics. 2007 Dec 8;8(1):473.
ABSTRACT: BACKGROUND: Eukaryotic cells have developed mechanisms to respond to external environmental or physiological changes (stresses). In order to increase the activities of stress-protection functions in response to an environmental change, the internal cell mechanisms need to induce certain specific gene expression patterns and pathways by changing the expression levels of specific transcription factors (TFs). The conventional methods to find these specific TFs and their interactivities are slow and laborious. In this study, a novel efficient method is proposed to detect the TFs and their interactivities that regulate yeast genes responding to any specific environmental change. RESULTS: For each gene expressed in a specific environmental condition, a dynamic regulatory model is constructed in which the coefficients of the model represent the transcriptional activities and interactivities of the corresponding TFs. The proposed method requires only microarray data and information on all TFs that bind to the gene, yet it offers better resolution than current methods. Our method not only can find stress-specific TFs but also can predict their regulatory strengths and interactivities. Moreover, TFs can be ranked, so that we can identify the major TFs for a stress. Similarly, it can rank the interactions between TFs and identify the major cooperative TF pairs. In addition, the cross-talks and interactivities among different stress-induced pathways are specified by the proposed scheme to gain much insight into protective mechanisms of yeast under different environmental stresses. CONCLUSIONS: In this study, we find significant stress-specific and cell cycle-controlled TFs by constructing a transcriptional dynamic model to regulate the expression profiles of genes under different environmental conditions through microarray data. 
We have applied this TF activity and interactivity detection method to many stress conditions, including hyper- and hypo-osmotic shock, heat shock, hydrogen peroxide, and the cell cycle, because the available expression time profiles for these conditions are long enough. In particular, we find significant TFs and cooperative TFs responding to environmental changes. Our method may also be applicable to other stresses if the gene expression profiles have been examined for a sufficiently long time. [Abstract/Link to Full Text]
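The regression idea in this abstract, fitting a gene's expression against TF activities plus TF-TF interaction terms whose coefficients are read off as activities and interactivities, can be sketched with ordinary least squares. The profiles below are synthetic and the static model is a simplification of the paper's dynamic model:

```python
import numpy as np

# Hypothetical activity profiles for two TFs over 8 time points.
t = np.arange(8)
f1, f2 = np.sin(0.5 * t), np.cos(0.5 * t)

# Synthetic gene expression: TF activities plus one interaction term.
y = 2.0 * f1 - 1.0 * f2 + 0.5 * f1 * f2

# Design matrix: one column per TF activity, one per TF-TF interaction;
# the fitted coefficients play the role of activities and interactivities.
X = np.column_stack([f1, f2, f1 * f2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # recovers ~[2.0, -1.0, 0.5]
```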

Abstracts from the 3rd International Society for Computational Biology (ISCB) Student Council Symposium at the 15th Annual International Conference on Intelligent Systems for Molecular Biology (ISMB), Vienna, Austria, 21 July 2007.
BMC Bioinformatics. 2007;8(Suppl 8):P1-8, S1-6. [Abstract/Link to Full Text]

Sridhar S, Lam F, Blelloch GE, Ravi R, Schwartz R
Direct maximum parsimony phylogeny reconstruction from genotype data.
BMC Bioinformatics. 2007 Dec 5;8(1):472.
ABSTRACT: BACKGROUND: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly are available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods to first computationally infer haplotypes from genotypes. RESULTS: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. CONCLUSIONS: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone. [Abstract/Link to Full Text]
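For a fixed tree with phased (haplotype-like) states at the tips, the parsimony cost this abstract refers to, the minimum number of mutations explaining the observed states, is computable with Fitch's classic algorithm; the hard part the paper addresses is doing this directly from unphased genotypes while searching over trees. A single-site sketch on a made-up tree:

```python
def fitch(tree, leaf_states):
    """Fitch small-parsimony count for one site.
    tree: nested 2-tuples with leaf names at the tips."""
    def walk(node):
        if isinstance(node, str):
            return {leaf_states[node]}, 0
        (s1, c1), (s2, c2) = walk(node[0]), walk(node[1])
        common = s1 & s2
        if common:
            return common, c1 + c2
        return s1 | s2, c1 + c2 + 1  # one mutation forced on this branch

    return walk(tree)[1]

tree = (("a", "b"), ("c", "d"))
mutations = fitch(tree, {"a": 0, "b": 0, "c": 1, "d": 0})  # 1
```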

Lee M, Jeong CS, Kim D
Predicting and improving the protein sequence alignment quality by support vector regression.
BMC Bioinformatics. 2007 Dec 3;8(1):471.
ABSTRACT: BACKGROUND: For successful protein structure prediction by comparative modeling, in addition to identifying a good template protein with known structure, obtaining an accurate sequence alignment between a query protein and a template protein is critical. It has been known that the alignment accuracy can vary significantly depending on the choice of various alignment parameters such as gap opening penalty and gap extension penalty. Because the accuracy of sequence alignment is typically measured by comparing it with its corresponding structure alignment, there is no good way of evaluating alignment accuracy without knowing the structure of a query protein, which is obviously not available at the time of structure prediction. Moreover, there is no universal alignment parameter option that would always yield the optimal alignment. RESULTS: In this work, we develop a method to predict the quality of the alignment between a query and a template. We train support vector regression (SVR) models to predict MaxSub scores as a measure of alignment quality. The alignment between a query protein and a template of length n is transformed into an (n+1)-dimensional feature vector, which is then used as input to predict the alignment quality with the trained SVR model. Performance of our work is evaluated by various measures, including the Pearson correlation coefficient between the observed and predicted MaxSub scores. Results show a high correlation coefficient of 0.945. For a pair of query and template, 48 alignments are generated by changing alignment options. Trained SVR models are then applied to predict the MaxSub scores of these alignments and to select the best alignment option specific to the query-template pair. This adaptive selection procedure results in a 7.4% improvement in MaxSub scores, compared to those obtained when the single best parameter option is used for all query-template pairs. 
CONCLUSIONS: The present work demonstrates that the alignment quality can be predicted with reasonable accuracy. Our method is useful not only for selecting the optimal alignment parameters for a chosen template based on predicted alignment quality, but also for filtering out problematic templates that are not suitable for structure prediction due to poor alignment accuracy. This is implemented as part of FORECAST, a server for fold recognition, and is freely available on the Web at [Abstract/Link to Full Text]

Ming D, Wall ME, Sanbonmatsu KY
Domain motions in Argonaute, the catalytic engine of RNA interference.
BMC Bioinformatics. 2007 Nov 30;8(1):470.
ABSTRACT: BACKGROUND: The Argonaute protein is the core component of the RNA-induced silencing complex, playing the central role of cleaving the mRNA target. Visual inspection of static crystal structures has already enabled researchers to suggest conformational changes of Argonaute that might occur during RNA interference. We have taken the next step by performing an all-atom normal mode analysis of the Pyrococcus furiosus and Aquifex aeolicus Argonaute crystal structures, allowing us to quantitatively assess the feasibility of these conformational changes. To perform the analysis, we begin with the energy-minimized X-ray structures. Normal modes are then calculated using an all-atom molecular mechanics force field. RESULTS: The analysis reveals low-frequency vibrations that facilitate the accommodation of RNA duplexes, an essential step in target recognition. The Pyrococcus furiosus and Aquifex aeolicus Argonaute proteins both exhibit low-frequency torsion and hinge motions; however, differences in the overall architecture of the proteins cause the detailed dynamics to be significantly different. CONCLUSIONS: Overall, low-frequency vibrations of Argonaute are consistent with mechanisms within the current reaction cycle model for RNA interference. [Abstract/Link to Full Text]

Shao Y, Wu S, Chan CY, Klapper JR, Schneider E, Ding Y
A structural analysis of in vitro catalytic activities of hammerhead ribozymes.
BMC Bioinformatics. 2007 Nov 30;8(1):469.
ABSTRACT: BACKGROUND: Ribozymes are small catalytic RNAs that possess the dual functions of sequence-specific RNA recognition and site-specific cleavage. Trans-cleaving ribozymes can inhibit translation of genes at the messenger RNA (mRNA) level in both eukaryotic and prokaryotic systems and are thus useful tools for studies of gene function. However, identification of target sites for efficient cleavage poses a challenge. Here, we have considered a number of structural and thermodynamic parameters that can affect the efficiency of target cleavage, in an attempt to identify rules for the selection of functional ribozymes. RESULTS: We employed the Sfold program for RNA secondary structure prediction, to account for the likely population of target structures that co-exist in dynamic equilibrium for a specific mRNA molecule. We designed and prepared 15 hammerhead ribozymes to target GUC cleavage sites in the mRNA of the breast cancer resistance protein (BCRP). These ribozymes were tested, and their catalytic activities were measured in vitro. We found that target disruption energy owing to the alteration of the local target structure necessary for ribozyme binding, and the total energy change of the ribozyme-target hybridization, are two significant parameters for prediction of ribozyme activity. Importantly, target disruption energy is the major contributor to the predictability of ribozyme activity by the total energy change. Furthermore, for a target-site specific ribozyme, incorrect folding of the catalytic core, or interactions involving the two binding arms and the end sequences of the catalytic core, can have detrimental effects on ribozyme activity. CONCLUSION: The findings from this study suggest rules for structure-based rational design of trans-cleaving hammerhead ribozymes in gene knockdown studies. 
Tools implementing these rules are available from the Sribo module and the Srna module of the Sfold program, available through the Sfold Web server at [Abstract/Link to Full Text]

Pfeifer N, Leinenbach A, Huber CG, Kohlbacher O
Statistical learning of peptide retention behavior in chromatographic separations: A new kernel-based approach for computational proteomics.
BMC Bioinformatics. 2007 Nov 30;8(1):468.
ABSTRACT: BACKGROUND: High-throughput peptide and protein identification technologies have benefited tremendously from strategies based on tandem mass spectrometry (MS/MS) in combination with database searching algorithms. A major problem with existing methods lies within the significant number of false positive and false negative annotations. So far, standard algorithms for protein identification do not use the information gained from separation processes usually involved in peptide analysis, such as retention time information, which are readily available from chromatographic separation of the sample. Identification can thus be improved by comparing measured retention times to predicted retention times. Current prediction models are derived from a set of measured test analytes but they usually require large amounts of training data. RESULTS: We introduce a new kernel function which can be applied in combination with support vector machines to a wide range of computational proteomics problems. We show the performance of this new approach by applying it to the prediction of peptide adsorption/elution behavior in strong anion-exchange solid-phase extraction (SAX-SPE) and ion-pair reversed-phase high-performance liquid chromatography (IP-RP-HPLC). Furthermore, the predicted retention times are used to improve spectrum identifications by a p-value-based filtering approach. The approach was tested on a number of different datasets and shows excellent performance while requiring only very small training sets (about 40 peptides instead of thousands). Using the retention time predictor in our retention time filter improves the fraction of correctly identified peptide mass spectra significantly. CONCLUSIONS: The proposed kernel function is well-suited for the prediction of chromatographic separation in computational proteomics and requires only a limited amount of training data. 
The performance of this new method is demonstrated by applying it to peptide retention time prediction in IP-RP-HPLC and prediction of peptide sample fractionation in SAX-SPE. Finally, we incorporate the predicted chromatographic behavior in a p-value based filter to improve peptide identifications based on liquid chromatography-tandem mass spectrometry. [Abstract/Link to Full Text]
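The p-value filter described above can be sketched independently of the kernel: assume retention-time prediction residuals are roughly Gaussian with known spread, convert each deviation into a two-sided p-value, and discard identifications in the tail. This is a simplified stand-in for the paper's filter; the peptide names, retention times, and sigma are invented:

```python
import math

def rt_p_value(observed_rt, predicted_rt, sigma):
    """Two-sided p-value for a retention-time residual under a
    Gaussian residual model with standard deviation sigma."""
    z = abs(observed_rt - predicted_rt) / sigma
    return math.erfc(z / math.sqrt(2))

def filter_identifications(hits, sigma=1.0, alpha=0.05):
    """Keep peptide IDs whose observed RT is consistent with prediction."""
    return [pep for pep, obs, pred in hits
            if rt_p_value(obs, pred, sigma) >= alpha]

hits = [("PEPTIDEA", 10.0, 10.1),   # consistent with prediction: kept
        ("PEPTIDEB", 10.0, 16.0)]   # 6 sigma off: filtered out
kept = filter_identifications(hits)  # ["PEPTIDEA"]
```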

Thorne T, Stumpf MP
Generating confidence intervals on biological networks.
BMC Bioinformatics. 2007 Nov 30;8(1):467.
ABSTRACT: BACKGROUND: In the analysis of networks we frequently require the statistical significance of some network statistic, such as measures of similarity for the properties of interacting nodes. The structure of the network may introduce dependencies among the nodes, and it is necessary to account for these dependencies in the statistical analysis. To this end we require some form of null model of the network: generally, rewired replicates of the network are generated which preserve only the degree (number of interactions) of each node. We show that this fails to capture important features of network structure and may result in unrealistic significance levels, especially when additional information is available. METHODS: We present a new network resampling null model which takes into account the degree sequence as well as biological annotation. Using gene ontology information as an illustration, we show how this information is accounted for in the resampling approach, and the impact such information has on the assessment of statistical significance of correlations and motif abundances in the Saccharomyces cerevisiae protein interaction network. An algorithm, GOcardShuffle, is introduced to allow for the efficient construction of an improved null model for network data. RESULTS: We use the protein interaction network of S. cerevisiae; correlations between the evolutionary rates and expression levels of interacting proteins and their statistical significance were assessed for null models which condition on different aspects of the available data. The novel GOcardShuffle approach results in a null model for annotated network data which appears to describe the properties of real biological networks better. CONCLUSIONS: An improved approach to the statistical analysis of biological network data, which conditions on the available biological information, leads to qualitatively different results compared to approaches which ignore such annotations. 
In particular, we demonstrate that the biological organization of the network suffices to explain the observed similarity of interacting proteins. [Abstract/Link to Full Text]
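The baseline null model criticized in this abstract, rewired replicates that preserve only node degree, is commonly built with double-edge swaps. A minimal undirected sketch (the paper's GOcardShuffle additionally conditions on GO annotation, which this omits; the example graph is made up):

```python
import random

def degree_preserving_rewire(edges, n_swaps=100, seed=0):
    """Randomize an undirected graph by double-edge swaps,
    (a-b, c-d) -> (a-d, c-b), keeping every node's degree fixed."""
    rng = random.Random(seed)
    edges = [tuple(sorted(e)) for e in edges]
    present = set(edges)
    swaps = attempts = 0
    while swaps < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        i, j = rng.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if len({a, b, c, d}) < 4:           # swap would create a self-loop
            continue
        e1, e2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
        if e1 in present or e2 in present:  # swap would create a multi-edge
            continue
        present -= {edges[i], edges[j]}
        present |= {e1, e2}
        edges[i], edges[j] = e1, e2
        swaps += 1
    return edges

# An 8-node cycle with two chords; rewiring preserves every node's degree.
original = [(i, (i + 1) % 8) for i in range(8)] + [(0, 4), (1, 5)]
rewired = degree_preserving_rewire(original, n_swaps=50)
```

A network statistic computed on many such replicates gives the null distribution against which the observed statistic is compared.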

Winters-Hilt S
The alpha-hemolysin nanopore transduction detector - single-molecule binding studies and immunological screening of antibodies and aptamers.
BMC Bioinformatics. 2007;8(Suppl 7):S9.
BACKGROUND: Nanopore detection is based on observations of the ionic current threading a single, highly stable, nanometer-scale channel. The dimensions are such that small biomolecules and biopolymers (like DNA and peptides) can translocate or be captured in the channel. The identities of translocating or captured molecules can often be discerned, one from another, based on their channel blockade "signatures". There is a self-limiting aspect to a translocation-based detection mechanism: as the channel fits tighter around the translocating molecule, the dynamic range of the ionic current signal is reduced. In this study, a lengthy, highly structured, high dynamic-range molecular capture is sought as a key component of a transduction-based nanopore detection platform. RESULTS: A specialized role, or device augmentation, involving bifunctional molecules has been explored. One function of the bifunctional molecule is to enter and blockade the channel in an information-rich, self-modulating manner, while the other function, usually binding, is located on a non-channel-captured portion of the molecule. Part of the bifunctional molecule is, thus, external to the channel and is free to bind or rigidly link to a larger molecule of interest. What results is an event transduction detector: molecular events are directly transduced into discernible changes in the stationary statistics of the bifunctional molecule's channel blockade. Several results of nanopore-based event-transduction detection are presented. CONCLUSION: It may be possible to directly track the bound versus unbound state of a huge variety of molecules using nanopore transduction detection. [Abstract/Link to Full Text]

Ding Y, Dang X, Peng H, Wilkins D
Robust clustering in high dimensional data using statistical depths.
BMC Bioinformatics. 2007;8(Suppl 7):S8.
BACKGROUND: Mean-based clustering algorithms such as bisecting k-means generally lack robustness. Although the componentwise median is a more robust alternative, it can be a poor center representative for high-dimensional data. We need a new algorithm that is robust and works well for high-dimensional data sets, e.g., gene expression data. RESULTS: Here we propose a new robust divisive clustering algorithm, the bisecting k-spatialMedian, based on the statistical spatial depth. A new subcluster selection rule, Relative Average Depth, is also introduced. We demonstrate that the proposed clustering algorithm outperforms the componentwise-median-based bisecting k-median algorithm for high-dimension, low-sample-size (HDLSS) data via applications of the algorithms on two real HDLSS gene expression data sets. When further applied on noisy real data sets, the proposed algorithm compares favorably in terms of robustness with the componentwise-median-based bisecting k-median algorithm. CONCLUSION: Statistical data depths provide an alternative way to find the "center" of multivariate data sets and are useful and robust for clustering. [Abstract/Link to Full Text]
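The spatial (geometric) median underlying the proposed algorithm can be computed with Weiszfeld's iteration. A minimal sketch, not the authors' code; the toy data set includes one gross outlier to show the robustness the abstract claims:

```python
import numpy as np

def spatial_median(X, n_iter=200, eps=1e-9):
    """Weiszfeld's algorithm: an iteratively re-weighted mean converging
    to the point minimizing the sum of Euclidean distances to rows of X."""
    m = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(X - m, axis=1), eps)
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < eps:
            break
        m = m_new
    return m

# Four clustered points plus one gross outlier.
X = np.array([[0.0, 0], [1, 0], [0, 1], [1, 1], [100, 100]])
center = spatial_median(X)  # stays near the cluster (~[0.79, 0.79]),
                            # while X.mean(axis=0) is dragged to [20.4, 20.4]
```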

Pirooznia M, Gong P, Guan X, Inouye LS, Yang K, Perkins EJ, Deng Y
Cloning, analysis and functional annotation of expressed sequence tags from the Earthworm Eisenia fetida.
BMC Bioinformatics. 2007;8(Suppl 7):S7.
BACKGROUND: Eisenia fetida, commonly known as the red wiggler or compost worm, belongs to the Lumbricidae family of the Annelida phylum. Little is known about its genome sequence, although it has been extensively used as a test organism in terrestrial ecotoxicology. In order to understand its gene expression response to environmental contaminants, we cloned 4032 cDNAs or expressed sequence tags (ESTs) from two E. fetida libraries enriched with genes responsive to ten ordnance-related compounds using suppressive subtractive hybridization-PCR. RESULTS: A total of 3144 good quality ESTs (GenBank dbEST accession numbers EH669363-EH672369 and EL515444-EL515580) were obtained from the raw clone sequences after cleaning. Clustering analysis yielded 2231 unique sequences, including 448 contigs (from 1361 ESTs) and 1783 singletons. Comparative genomic analysis showed that 743, or 33%, of the unique sequences shared high similarity with existing genes in the GenBank nr database. Provisional function annotation assigned 830 Gene Ontology terms to 517 unique sequences based on their homology with the annotated genomes of four model organisms: Drosophila melanogaster, Mus musculus, Saccharomyces cerevisiae, and Caenorhabditis elegans. Seven percent of the unique sequences were further mapped to 99 Kyoto Encyclopedia of Genes and Genomes pathways based on their matching Enzyme Commission numbers. All the information is stored in and retrievable from a high-performance, Web-based, user-friendly relational database called the EST model database, or ESTMD, version 2. CONCLUSION: The ESTMD, containing the sequence and annotation information of 4032 E. fetida ESTs, is publicly accessible at [Abstract/Link to Full Text]

Yuan JS, Burris J, Stewart NR, Mentewab A, Stewart CN
Statistical tools for transgene copy number estimation based on real-time PCR.
BMC Bioinformatics. 2007;8(Suppl 7):S6.
BACKGROUND: Compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive, and high-throughput alternative. However, real-time PCR based transgene copy number estimation tends to be ambiguous and subjective, owing to the lack of proper statistical analysis and data quality control needed to render a reliable estimation of copy number. Despite recent progress in the statistical analysis of real-time PCR, few publications have integrated these advancements into real-time PCR based transgene copy number determination. RESULTS: Three experimental designs and four data-quality-control integrated statistical models are presented. For the first method, external calibration curves are established for the transgene based on serially diluted templates. The Ct numbers from a control transgenic event and a putative transgenic event are compared to derive the transgene copy number or zygosity estimation. Simple linear regression and two-group t test procedures were combined to model the data from this design. For the second experimental design, standard curves were generated for both an internal reference gene and the transgene, and the copy number of the transgene was compared with that of the internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with the reference gene without a standard curve, but rather is based directly on fluorescence data. Two different multiple regression models were proposed to analyze the data based on two different approaches of amplification efficiency integration. Our results highlight the importance of proper statistical treatment and quality control integration in real-time PCR-based transgene copy number determination. 
CONCLUSION: These statistical methods make real-time PCR-based transgene copy number estimation more reliable and precise. Proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four statistical methods are compared for their advantages and disadvantages. Moreover, these statistical methods can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification. [Abstract/Link to Full Text]
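
The calibration-based comparison of Ct values described above can be illustrated with a generic efficiency-corrected ratio calculation. This is a sketch of the widely used ddCt-style approach, not the regression or ANOVA models of the paper; the function name and example Ct values are ours:

```python
def copy_number_ratio(ct_target_sample, ct_target_calibrator,
                      ct_ref_sample, ct_ref_calibrator,
                      eff_target=2.0, eff_ref=2.0):
    # Efficiency-corrected relative quantification: each extra transgene
    # copy shifts its Ct earlier relative to the single-copy reference
    # gene. Efficiencies of 2.0 assume perfect doubling per cycle.
    target_fold = eff_target ** (ct_target_calibrator - ct_target_sample)
    ref_fold = eff_ref ** (ct_ref_calibrator - ct_ref_sample)
    return target_fold / ref_fold

# A sample whose transgene amplifies one cycle earlier than a one-copy
# calibrator, with the reference gene unchanged, looks like two copies:
print(copy_number_ratio(24.0, 25.0, 20.0, 20.0))  # → 2.0
```

In practice, as the abstract stresses, such point estimates should be accompanied by confidence intervals before a copy number call is made.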

Nagarajan V, Elasri MO
Structure and function predictions of the Msa protein in Staphylococcus aureus.
BMC Bioinformatics. 2007;8 Suppl 7:S5.
BACKGROUND: Staphylococcus aureus is a human pathogen that causes a wide variety of life-threatening infections using a large number of virulence factors. One of the major global regulators used by S. aureus is the staphylococcal accessory regulator (sarA). We have identified and characterized a new gene (modulator of sarA: msa) that modulates the expression of sarA. Genetic and functional analysis shows that msa has a global effect on gene expression in S. aureus. However, the mechanism of Msa function is still unknown. Function predictions of Msa are complicated by the fact that it does not have a homologous partner in any other organism. This work aims at predicting the structure and function of the Msa protein. RESULTS: Preliminary sequence analysis showed that Msa is a putative membrane protein. It would therefore be very difficult to purify and crystallize Msa in order to acquire structure information about this protein. We have used several computational tools to predict the physico-chemical properties, secondary structural features, topology, 3D tertiary structure, binding sites, motifs/patterns/domains and cellular location. We have built a consensus that is derived from analysis using different algorithms to predict several structural features. We confirm that Msa is a putative membrane protein with three transmembrane regions. We also predict that Msa has phosphorylation sites and binding sites, suggesting functions in signal transduction. CONCLUSION: Based on our predictions we hypothesise that Msa is a novel signal transducer that might be involved in the interaction of S. aureus with its environment. [Abstract/Link to Full Text]

Mei N, Guo L, Liu R, Fuscoe JC, Chen T
Gene expression changes induced by the tumorigenic pyrrolizidine alkaloid riddelliine in liver of Big Blue rats.
BMC Bioinformatics. 2007;8 Suppl 7:S4.
BACKGROUND: Pyrrolizidine alkaloids (PAs) are probably the most common plant constituents that poison livestock, wildlife, and humans worldwide. Riddelliine, isolated from plants grown in the western United States, is a prototype of genotoxic PAs. In this study, riddelliine was used to investigate the genotoxic effects of PAs via analysis of gene expression in the target tissue of rats. Previously we observed that the mutant frequency in the liver of rats gavaged with riddelliine was 3-fold higher than that in the control group. Molecular analysis of the mutants indicated a statistically significant difference between the mutational spectra from riddelliine-treated and control rats. RESULTS: Riddelliine-induced gene expression profiles in livers of Big Blue transgenic rats were determined. The female rats were gavaged with riddelliine at a dose of 1 mg/kg body weight 5 days a week for 12 weeks. A rat whole-genome microarray was used to perform genome-wide gene expression studies. When a cutoff value of a two-fold change and a P-value less than 0.01 were used as gene selection criteria, 919 genes were identified as differentially expressed in riddelliine-treated rats compared to the control animals. Analysis with the Ingenuity Pathway Analysis Network showed that these significantly changed genes were mainly involved in cancer, cell death, tissue development, cellular movement, tissue morphology, cell-to-cell signaling and interaction, and cellular growth and proliferation. We further analyzed in detail the genes involved in metabolism, injury of endothelial cells, liver abnormalities, and cancer development. CONCLUSION: The alterations in gene expression were directly related to the pathological outcomes reported previously. 
These results provided further insight into the mechanisms involved in toxicity and carcinogenesis after exposure to riddelliine, and permitted us to investigate the interaction of gene products within the signaling networks. [Abstract/Link to Full Text]

Schnackenberg LK, Sun J, Espandiari P, Holland RD, Hanig J, Beger RD
Metabonomics evaluations of age-related changes in the urinary compositions of male Sprague Dawley rats and effects of data normalization methods on statistical and quantitative analysis.
BMC Bioinformatics. 2007;8 Suppl 7:S3.
BACKGROUND: Urine from male Sprague-Dawley rats 25, 40, and 80 days old was analyzed by NMR and UPLC/MS. The effects of data normalization procedures on principal component analysis (PCA) and quantitative analysis of NMR-based metabonomics data were investigated. Additionally, the effects of age on the metabolic profiles were examined by both NMR and UPLC/MS analyses. RESULTS: The data normalization factor was shown to have a great impact on the statistical and quantitative results, indicating the need to carefully consider how best to normalize the data within a particular study and when comparing different studies. PCA applied to the data obtained from both NMR and UPLC/MS platforms revealed similar age-related differences. NMR indicated that many metabolites associated with the Krebs cycle decrease, while citrate and 2-oxoglutarate, also associated with the Krebs cycle, increase in older rats. CONCLUSION: This study compared four different normalization methods for the NMR-based metabonomics spectra from an age-related study. Each method of normalization was shown to have a great effect on both the statistical and quantitative analyses. Each normalization method altered the relative positions of significant PCA loadings for each sample spectrum but did not alter which chemical shifts had the highest loadings. The more strongly the normalization factor was related to age, the greater the separation observed between age groups in subsequent PCA analyses. The normalization factor that showed the least age dependence was total NMR intensity, which was consistent with the UPLC/MS data. Normalization by total intensity attempts to correct for the dietary and water intake of the individual animal, which is especially useful in metabonomics evaluations of urine. 
Additionally, metabonomics evaluations of age-related effects showed decreased concentrations of many Krebs cycle intermediates along with increased levels of oxidized antioxidants in urine of older rats, which is consistent with current theories on aging and its association with diminishing mitochondrial function and increasing levels of reactive oxygen species. Analysis of urine by both NMR and UPLC/MS provides a comprehensive and complementary means of examining metabolic events in aging rats. [Abstract/Link to Full Text]
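
The total-intensity option among the normalization methods compared above can be sketched in a few lines: each spectrum is scaled so its binned intensities sum to one, compensating for overall dilution differences between urine samples. The function name and toy spectra are ours, not the paper's:

```python
def normalize_total_intensity(spectra):
    # Scale each spectrum (a list of bin intensities) to unit total
    # intensity, correcting for per-sample dilution before PCA.
    normalized = []
    for spectrum in spectra:
        total = sum(spectrum)
        normalized.append([x / total for x in spectrum])
    return normalized

spectra = [[2.0, 4.0, 4.0], [1.0, 1.0, 2.0]]
print(normalize_total_intensity(spectra))
# → [[0.2, 0.4, 0.4], [0.25, 0.25, 0.5]]
```

After this step the two samples are directly comparable bin by bin, which is the property the subsequent PCA relies on.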

Loganantharaj R, Atwi M
Towards validating the hypothesis of phylogenetic profiling.
BMC Bioinformatics. 2007;8 Suppl 7:S25.
BACKGROUND: As the number of fully sequenced genomes increases, so does the need for bioinformatics methods to predict or annotate the genes of newly sequenced genomes. Ever since Eisenberg and his colleagues introduced phylogenetic profiling for assigning or predicting protein functions using comparative genomic analysis, the approach has been used quite successfully to predict functions in some prokaryotic genomes. Very little work has been reported on functional prediction in eukaryotes such as mouse and Homo sapiens from phylogenetic profiles. RESULTS: We have proposed a general methodology for validating the hypothesis underlying phylogenetic profiling techniques, and have demonstrated it using eukaryotic target genomes, namely Homo sapiens and mouse. The gene ontology is used as the gold standard for validating functional similarity among the genes in each cluster. We compute the functional cohesiveness of each cluster, and the results were not encouraging with respect to finding functionally cohesive phylogenetic profiles. This result complements recent work reporting the poor performance of phylogenetic profiling techniques for functional linkage in some eukaryotic genomes. If we introduce a broad interpretation of functionally related genes as functional sub-clustering within a phylogenetic profile, then the hypothesis receives very strong support, as we show in the paper. [Abstract/Link to Full Text]
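
The core profiling idea can be sketched as grouping genes by identical presence/absence vectors across a set of reference genomes. This is a toy illustration only; the paper's clustering and its GO-based cohesiveness scoring are more involved, and the gene names below are invented:

```python
from collections import defaultdict

def cluster_by_profile(profiles):
    # Group genes that share an identical phylogenetic profile, i.e. the
    # same presence (1) / absence (0) pattern across reference genomes.
    clusters = defaultdict(list)
    for gene, profile in profiles.items():
        clusters[tuple(profile)].append(gene)
    return {k: sorted(v) for k, v in clusters.items()}

profiles = {"geneA": [1, 0, 1], "geneB": [1, 0, 1], "geneC": [0, 1, 1]}
print(cluster_by_profile(profiles))
# → {(1, 0, 1): ['geneA', 'geneB'], (0, 1, 1): ['geneC']}
```

The validation question the paper asks is whether genes landing in the same cluster (geneA and geneB here) actually share GO annotations.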

Bridges SM, Magee GB, Wang N, Williams WP, Burgess SC, Nanduri B
ProtQuant: a tool for the label-free quantification of MudPIT proteomics data.
BMC Bioinformatics. 2007;8 Suppl 7:S24.
BACKGROUND: Effective and economical methods for quantitative analysis of high throughput mass spectrometry data are essential to meet the goals of directly identifying, characterizing, and quantifying proteins from a particular cell state. Multidimensional Protein Identification Technology (MudPIT) is a common approach used in protein identification. Two types of methods are used to detect differential protein expression in MudPIT experiments: those involving stable isotope labelling and the so-called label-free methods. Label-free methods are based on the relationship between protein abundance and sampling statistics such as peptide count, spectral count, probabilistic peptide identification scores, and sum of peptide Sequest XCorr scores (SigmaXCorr). Although a number of label-free methods for protein quantification have been described in the literature, there are few publicly available tools that implement these methods. We describe ProtQuant, a Java-based tool for label-free protein quantification that uses the previously published SigmaXCorr method for quantification and includes an improved method for handling missing data. RESULTS: ProtQuant was designed for ease of use and portability for the bench scientist. It implements the SigmaXCorr method for label-free protein quantification from MudPIT datasets. ProtQuant has a graphical user interface, accepts multiple file formats, is not limited by the size of the input files, and can process any number of replicates and any number of treatments. In addition, ProtQuant implements a new method for dealing with missing values for peptide scores used for quantification. The new algorithm, called SigmaXCorr*, uses "below threshold" peptide scores to provide meaningful non-zero values for missing data points. We demonstrate that SigmaXCorr* produces an average reduction in false positive identifications of differential expression of 25% compared to SigmaXCorr. 
CONCLUSION: ProtQuant is a tool for protein quantification built for multi-platform use with an intuitive user interface. ProtQuant efficiently and uniquely performs label-free quantification of protein datasets produced with Sequest and provides the user with facilities for data management and analysis. Importantly, ProtQuant is available as a self-installing executable for the Windows environment used by many bench scientists. [Abstract/Link to Full Text]
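
The difference between the two scoring rules can be sketched as follows. The threshold value and the exact below-threshold substitution rule here are illustrative assumptions, not ProtQuant's implementation:

```python
def sigma_xcorr(scores, threshold=1.5):
    # Conventional SigmaXCorr: sum XCorr only over confidently
    # identified peptides; a peptide seen below threshold counts as 0.
    return sum(s for s in scores if s >= threshold)

def sigma_xcorr_star(scores, threshold=1.5):
    # SigmaXCorr*-style idea: "below threshold" scores stand in for
    # otherwise-missing data points instead of zeros, so the protein's
    # total is never deflated by an arbitrary zero.
    return sum(scores)

scores = [2.4, 3.1, 0.9]  # third peptide falls below the 1.5 cutoff
print(sigma_xcorr(scores), sigma_xcorr_star(scores))
```

The non-zero substitute damps spurious fold changes between replicates, which is the mechanism behind the reported reduction in false positives.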

Sanders WS, Bridges SM, McCarthy FM, Nanduri B, Burgess SC
Prediction of peptides observable by mass spectrometry applied at the experimental set level.
BMC Bioinformatics. 2007;8 Suppl 7:S23.
BACKGROUND: When proteins are subjected to proteolytic digestion and analyzed by mass spectrometry using a method such as 2D LC MS/MS, only a portion of the proteotypic peptides associated with each protein will be observed. The ability to predict which peptides can and cannot potentially be observed for a particular experimental dataset has several important applications in proteomics research including calculation of peptide coverage in terms of potentially detectable peptides, systems biology analysis of data sets, and protein quantification. RESULTS: We have developed a methodology for constructing artificial neural networks that can be used to predict which peptides are potentially observable for a given set of experimental, instrumental, and analytical conditions for 2D LC MS/MS (a.k.a. Multidimensional Protein Identification Technology [MudPIT]) datasets. Neural network classifiers constructed using this procedure for two MudPIT datasets exhibit 10-fold cross validation accuracy of about 80%. We show that a classifier constructed for one dataset has poor predictive performance with the other dataset, thus demonstrating the need for dataset-specific classifiers. Classification results with each dataset are used to compute informative percent amino acid coverage statistics for each protein in terms of the predicted detectable peptides in addition to the percent coverage of the complete sequence. We also demonstrate the utility of predicted peptide observability for systems analysis to help determine if proteins that were expected but not observed generate sufficient peptides for detection. CONCLUSION: Classifiers that accurately predict the likelihood of detecting proteotypic peptides by mass spectrometry provide proteomics researchers with powerful new approaches for data analysis. 
We demonstrate that the procedure we have developed for building a classifier based on an individual experimental data set results in classifiers with accuracy comparable to those reported in the literature based on large training sets collected from multiple experiments. Our approach allows the researcher to construct a classifier that is specific for the experimental, instrument, and analytical conditions of a single experiment and amenable to local, condition-specific, implementation. The resulting classifiers have application in a number of areas such as determination of peptide coverage for protein identification, pathway analysis, and protein quantification. [Abstract/Link to Full Text]
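
A classifier of this kind consumes numeric peptide features. Below is a toy example of such a feature vector (sequence length plus composition fractions for a few basic residues); the paper's actual feature set is richer and tuned per dataset, and this function is our invention:

```python
def peptide_features(peptide):
    # Toy observability features: peptide length, then the fraction of
    # lysine (K), arginine (R), and histidine (H) residues. A real
    # classifier would use many more physico-chemical descriptors.
    n = len(peptide)
    return [n] + [peptide.count(r) / n for r in "KRH"]

print(peptide_features("PEPTIDEK"))  # → [8, 0.125, 0.0, 0.0]
```

Vectors like this, labeled observed/not-observed from one experiment, are what the dataset-specific neural network is trained on.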

Guo L, Mei N, Dial S, Fuscoe J, Chen T
Comparison of gene expression profiles altered by comfrey and riddelliine in rat liver.
BMC Bioinformatics. 2007;8 Suppl 7:S22.
BACKGROUND: Comfrey (Symphytum officinale) is a perennial plant that has been consumed by humans as a vegetable, a tea, and an herbal medicine for more than 2000 years. However, it is hepatotoxic and carcinogenic in experimental animals and hepatotoxic in humans. Pyrrolizidine alkaloids (PAs) exist in many plants, and many of them cause liver toxicity and/or cancer in humans and experimental animals. In our previous study, we found that the mutagenicity of comfrey was associated with the PAs contained in the plant. We therefore hypothesized that the carcinogenicity of comfrey results from those PAs. To test this hypothesis, in this study we compared the expression of genes and the biological processes altered by comfrey (the plant containing a mixture of PAs) and riddelliine (a prototype carcinogenic PA) in rat liver. RESULTS: Groups of 6 Big Blue Fischer 344 rats were treated with riddelliine at 1 mg/kg body weight by gavage five times a week for 12 weeks or fed a diet containing 8% comfrey root for 12 weeks. Animals were sacrificed one day after the last treatment, and the livers were isolated for gene expression analysis. Gene expression was investigated using Applied Biosystems Rat Whole Genome Survey Microarrays, and the biological functions were analyzed with Ingenuity Pathway Analysis software. Although there were large differences between the significant genes and between the biological processes altered by comfrey and riddelliine, there were a number of common genes and functional processes related to carcinogenesis. There was a strong correlation between the two treatments for fold-change alterations in expression of drug-metabolizing and cancer-related genes. 
CONCLUSION: Our results suggest that the carcinogenesis-related gene expression patterns resulting from the treatments of comfrey and riddelliine are very similar, and PAs contained in comfrey are the main active components responsible for carcinogenicity of the plant. [Abstract/Link to Full Text]

Das MK, Dai HK
A survey of DNA motif finding algorithms.
BMC Bioinformatics. 2007;8 Suppl 7:S21.
BACKGROUND: Unraveling the mechanisms that regulate gene expression is a major challenge in biology. An important task in this challenge is to identify regulatory elements, especially the binding sites in deoxyribonucleic acid (DNA) for transcription factors. These binding sites are short DNA segments called motifs. Recent advances in genome sequence availability and in high-throughput gene expression analysis technologies have allowed for the development of computational methods for motif finding. As a result, a large number of motif finding algorithms have been implemented and applied to various motif models over the past decade. This survey reviews the latest developments in DNA motif finding algorithms. RESULTS: Earlier algorithms use promoter sequences of coregulated genes from a single genome and search for statistically overrepresented motifs. Recent algorithms are designed to use phylogenetic footprinting or orthologous sequences, as well as integrated approaches in which promoter sequences of coregulated genes and phylogenetic footprinting are combined. All the algorithms studied have been reported to correctly detect motifs previously identified by laboratory experimental approaches, and some algorithms were able to find novel motifs. However, most of these motif finding algorithms have been shown to work successfully in yeast and other lower organisms, but perform significantly worse in higher organisms. CONCLUSION: Despite considerable efforts to date, DNA motif finding remains a complex challenge for biologists and computer scientists. Researchers have taken many different approaches in developing motif discovery tools, and the progress made in this area of research is very encouraging. 
Comparing the performance of different motif finding tools and identifying the best ones has proven difficult, because the tools are based on diverse and complex algorithms and motif models, and because our incomplete understanding of the biology of regulatory mechanisms does not always allow adequate evaluation of the underlying algorithms across motif models. [Abstract/Link to Full Text]
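
The word-counting family of motif finders surveyed above can be sketched in a few lines: count every k-mer across a set of promoter sequences, then look for overrepresented words. The sequences are toy data, and real tools add background models and significance statistics on top of this count:

```python
from collections import Counter

def motif_counts(sequences, k):
    # Count every k-mer occurrence across all promoter sequences.
    # Overrepresented k-mers are candidate transcription-factor motifs.
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    return counts

seqs = ["ACGTACGT", "TTACGTAA"]
print(motif_counts(seqs, 4).most_common(1))  # → [('ACGT', 3)]
```

Phylogenetic-footprinting methods apply the same counting logic to orthologous promoters from multiple genomes rather than coregulated genes from one.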

Winters-Hilt S, Morales E, Amin I, Stoyanov A
Nanopore-based kinetics analysis of individual antibody-channel and antibody-antigen interactions.
BMC Bioinformatics. 2007;8 Suppl 7:S20.
BACKGROUND: The UNO/RIC Nanopore Detector provides a new way to study the binding and conformational changes of individual antibodies. Many critical questions regarding antibody function are still unresolved, questions that can be approached in a new way with the nanopore detector. RESULTS: We present evidence that different forms of channel blockade can be associated with the same antibody; we associate these different blockades with different orientations of "capture" of an antibody in the detector's nanometer-scale channel. We directly detect the presence of antibodies via reductions in channel current. Changes to blockade patterns upon addition of antigen suggest indirect detection of antibody/antigen binding. Similarly, DNA-hairpin-anchored antibodies have been studied, where the DNA linkage is to the carboxy-terminus at the base of the antibody's Fc region; these showed significantly fewer types of (lengthy) capture blockades than were observed for free (unbound) IgG antibody. The effects of introducing chaotropic agents on protein-protein interactions have also been observed. CONCLUSION: Nanopore-based approaches may eventually provide a direct analysis of the complex conformational "negotiations" that occur upon binding between proteins. [Abstract/Link to Full Text]

Dozmorov MG, Kyker KD, Saban R, Shankar N, Baghdayan AS, Centola MB, Hurst RE
Systems biology approach for mapping the response of human urothelial cells to infection by Enterococcus faecalis.
BMC Bioinformatics. 2007;8 Suppl 7:S2.
BACKGROUND: To better understand the response of urinary epithelial (urothelial) cells to Enterococcus faecalis, a uropathogen that exhibits resistance to multiple antibiotics, a genome-wide scan of gene expression was obtained as a time series from urothelial cells growing as a layered 3-dimensional culture similar to normal urothelium. We herein describe a novel means of analysis that is based on deconvolution of gene variability into technical and biological components. RESULTS: Analysis of the expression of 21,521 genes from 30 minutes to 10 hours post infection showed 9553 genes were expressed 3 standard deviations (SD) above the system zero-point noise in at least 1 time point. The asymmetric distribution of relative variances of the expressed genes was deconvoluted into technical variation (with a 6.5% relative SD) and biological variation components (>3 SD above the mode technical variability). These 1409 hypervariable (HV) genes encapsulated the effect of infection on gene expression. Pathway analysis of the HV genes revealed an orchestrated response to infection in which early events included initiation of immune response, cytoskeletal rearrangement and cell signaling, followed at the end by apoptosis and shutting down of cell metabolism. The number of poorly annotated genes in the earliest time points suggests that heretofore unknown processes are likely also involved. CONCLUSION: Enterococcus infection produced an orchestrated response by the host cells involving several pathways and transcription factors that potentially drive these pathways. The early time points potentially identify novel targets for enhancing the host response. These approaches combine rigorous statistical principles with a biological context and are readily applied by biologists. [Abstract/Link to Full Text]
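
The separation of biological from technical variability can be sketched by flagging genes whose relative standard deviation across the time series far exceeds the technical component (6.5% relative SD per the abstract). The cutoff rule and toy profiles below are simplified assumptions, not the paper's deconvolution of the relative-variance distribution:

```python
import statistics

def flag_hypervariable(profiles, technical_rel_sd=0.065, k=3.0):
    # Flag genes whose relative SD over the time series exceeds k times
    # the technical relative SD; these carry the biological signal.
    hv = []
    for gene, values in profiles.items():
        mean = sum(values) / len(values)
        rel_sd = statistics.stdev(values) / mean
        if rel_sd > k * technical_rel_sd:
            hv.append(gene)
    return hv

profiles = {"flat": [100, 101, 99, 100], "induced": [100, 160, 240, 300]}
print(flag_hypervariable(profiles))  # → ['induced']
```

Genes passing such a filter are the ones worth sending on to pathway analysis, as the study does with its 1409 HV genes.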

Winters-Hilt S, Baribault C
A novel, fast, HMM-with-Duration implementation - for application with a new, pattern recognition informed, nanopore detector.
BMC Bioinformatics. 2007;8 Suppl 7:S19.
BACKGROUND: Hidden Markov Models (HMMs) provide an excellent means for structure identification and feature extraction on stochastic sequential data. An HMM-with-Duration (HMMwD) is an HMM that can also exactly model the hidden-label length (recurrence) distributions, whereas the regular HMM imposes a best-fit geometric distribution in its modeling/representation. RESULTS: A novel, fast HMM-with-Duration (HMMwD) implementation is presented, and experimental results are shown that demonstrate its performance on two-state synthetic data designed to model nanopore detector data. The HMMwD experimental results are compared to (i) the ideal model and (ii) the conventional HMM. Its accuracy is clearly an improvement over the standard HMM, and matches that of the ideal solution in many cases where the standard HMM does not. Computationally, the new HMMwD has all the speed advantages of the conventional (simpler) HMM implementation. In preliminary work shown here, HMM feature extraction is then used to establish the first pattern recognition-informed (PRI) sampling control of a nanopore detector device (on a "live" data stream). CONCLUSION: The improved accuracy of the new HMMwD implementation, at the same order of computational cost as the standard HMM, is an important augmentation for applications in gene structure identification and channel current analysis, especially PRI sampling control, where speed is essential. The PRI experiment was designed to inherit the high accuracy of the well-characterized and distinctive blockades of the DNA hairpin molecules used as controls (or blockade "test probes"). For this test set, the accuracy inherited is 99.9%. [Abstract/Link to Full Text]
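
The geometric dwell-time constraint that the HMMwD removes can be made concrete: in a standard HMM, a state with self-transition probability p stays exactly d steps with probability p^(d-1)(1-p), regardless of the true duration distribution. A minimal sketch (function name ours):

```python
def geometric_duration_pmf(self_prob, d):
    # Dwell-time pmf implied by a standard HMM state: (d-1) self-loops
    # followed by one exit. An HMM-with-Duration replaces this forced
    # geometric shape with an arbitrary learned duration distribution.
    return self_prob ** (d - 1) * (1 - self_prob)

# With p = 0.9, a 3-step dwell has probability 0.9^2 * 0.1 = 0.081:
print(geometric_duration_pmf(0.9, 3))
```

Because this pmf is strictly decreasing for any p, a plain HMM cannot represent, say, a blockade level whose most likely duration is 50 samples, which is exactly the modeling gap the HMMwD fills.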

Winters-Hilt S, Merat S
SVM clustering.
BMC Bioinformatics. 2007;8 Suppl 7:S18.
BACKGROUND: Support Vector Machines (SVMs) provide a powerful method for classification (supervised learning). Use of SVMs for clustering (unsupervised learning) is now being considered in a number of different ways. RESULTS: An SVM-based clustering algorithm is introduced that clusters data with no a priori knowledge of input classes. The algorithm initializes by running a binary SVM classifier against a data set with each vector in the set randomly labelled; this is repeated until an initial convergence occurs. Once this initialization step is complete, the SVM confidence parameters for classification on each of the training instances can be accessed. The lowest-confidence data (i.e., the worst of the mislabelled data) then have their labels switched to the other class label. The SVM is then re-run on the data set (with partly re-labelled data) and is guaranteed to converge in this situation since it converged previously, and now it has fewer data points to carry with mislabelling penalties. This approach appears to limit exposure to the local minima traps that can occur with other approaches. The algorithm then improves on its weakly convergent result by SVM re-training after each re-labeling of the worst of the misclassified vectors, i.e., those feature vectors with confidence factor values beyond some threshold. Repetition of this process improves the accuracy, here a measure of separability, until there are no misclassifications. Variations on this type of clustering approach are shown. CONCLUSION: Non-parametric SVM-based clustering methods may allow for much improved performance over parametric approaches, particularly if they can be designed to inherit the strengths of their supervised SVM counterparts. [Abstract/Link to Full Text]
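
The relabeling loop described above can be sketched with a nearest-centroid classifier standing in for the SVM (an assumption made purely for brevity). "Confidence" here is how much closer a point sits to its own class centroid than to the other one, playing the role of the SVM margin:

```python
def svm_style_relabel(points, labels, max_rounds=50):
    # Skeleton of the SVM-clustering loop: train on the current labels,
    # find the worst misclassified point, flip its label, retrain,
    # until no misclassifications remain. Points are 1-D floats here.
    labels = list(labels)
    for _ in range(max_rounds):
        sums, counts = [0.0, 0.0], [0, 0]
        for x, y in zip(points, labels):
            sums[y] += x
            counts[y] += 1
        if 0 in counts:          # one class emptied out; stop
            break
        m0, m1 = sums[0] / counts[0], sums[1] / counts[1]
        # confidence: positive if the point is closer to its own centroid
        conf = [((x - m0) ** 2 - (x - m1) ** 2) * (1 if y else -1)
                for x, y in zip(points, labels)]
        worst = min(range(len(points)), key=conf.__getitem__)
        if conf[worst] >= 0:     # nothing misclassified: converged
            break
        labels[worst] = 1 - labels[worst]   # flip the worst offender
    return labels

# Two well-separated 1-D clusters recover from a scrambled labelling:
print(svm_style_relabel([0.0, 0.1, 0.2, 5.0, 5.1, 5.2], [0, 1, 0, 1, 0, 1]))
# → [0, 0, 0, 1, 1, 1]
```

Flipping only the single worst point per round is what makes each retraining start from a configuration the previous classifier could already fit, mirroring the convergence argument in the abstract.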

Mete M, Xu X, Fan CY, Shafirstein G
Automatic delineation of malignancy in histopathological head and neck slides.
BMC Bioinformatics. 2007;8 Suppl 7:S17.
BACKGROUND: Histopathology, one of the most important routines of all laboratory procedures used in pathology, is decisive for the diagnosis of cancer. Experienced histopathologists review the histological slides acquired from biopsy specimens in order to outline malignant areas. Recently, improvements in imaging technologies for histological image analysis have led to the development of virtual histological slides. In this technique, a computerized microscope scans a glass slide and generates virtual slides at a resolution of 0.25 μm/pixel. As the recognition of intrinsic cancer areas is time consuming and error prone, in this study we develop a novel method to tackle the problem of automatically detecting squamous cell carcinoma of the head and neck in high-resolution, wholly-scanned histopathological slides. RESULTS: A density-based clustering algorithm improved for this study plays a key role in the determination of the corrupted cell nuclei. Using a Support Vector Machine (SVM) classifier, experimental results on seven head and neck slides show that the proposed algorithm performs well, obtaining an average of 96% classification accuracy. CONCLUSION: Recent advances in imaging technology enable us to investigate cancer tissue at the cellular level. In this study we focus on wholly-scanned histopathological slides of head and neck tissues. In the context of computer-aided diagnosis, delineation of malignant regions is achieved using a powerful classification algorithm, which heavily depends on the features extracted with the aid of a newly proposed cell nuclei clustering technique. The preliminary experimental results demonstrate the high accuracy of the proposed method. [Abstract/Link to Full Text]

Gusev Y, Schmittgen TD, Lerner M, Postier R, Brackett D
Computational analysis of biological functions and pathways collectively targeted by co-expressed microRNAs in cancer.
BMC Bioinformatics. 2007;8 Suppl 7:S16.
BACKGROUND: Multiple recent studies have found aberrant expression profiles of the microRNAome in human cancers. While several target genes have been experimentally identified for some microRNAs in various tumors, the global pattern of cellular functions and pathways affected by co-expressed microRNAs in cancer remains elusive. The goal of this study was to develop a computational approach to global analysis of the major biological processes and signaling pathways that are most likely to be affected collectively by co-expressed microRNAs in cancer cells. RESULTS: We report results of computational analysis of five datasets of aberrantly expressed microRNAs in five human cancers published by the authors (pancreatic cancer) and others (breast cancer, colon cancer, lung cancer and lymphoma). Using the combinatorial target prediction algorithm miRgate and a two-step data reduction procedure, we have determined Gene Ontology categories as well as biological functions, disease categories, toxicological categories and signaling pathways that are: targeted by multiple microRNAs; statistically significantly enriched with target genes; and known to be affected in specific cancers. CONCLUSION: Our global analysis of predicted miRNA targets suggests that co-expressed miRNAs collectively provide systemic compensatory response to the abnormal phenotypic changes in cancer cells by targeting a broad range of functional categories and signaling pathways known to be affected in a particular cancer. Such a systems biology-based approach provides new avenues for biological interpretation of miRNA profiling data and generation of experimentally testable hypotheses regarding collective regulatory functions of miRNA in cancer. [Abstract/Link to Full Text]
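
The enrichment step, deciding whether a functional category contains more predicted targets than chance would allow, is commonly a hypergeometric tail test. Below is a generic sketch of that test, not the miRgate pipeline itself; the example numbers are invented:

```python
from math import comb

def enrichment_pvalue(N, K, n, k):
    # Hypergeometric upper tail: probability of seeing k or more genes
    # from a category of size K when drawing n predicted targets out of
    # a genome of N genes, i.e. P(X >= k).
    total = sum(comb(K, i) * comb(N - K, n - i)
                for i in range(k, min(K, n) + 1))
    return total / comb(N, n)

# 5 of 10 predicted targets landing in a 10-gene category (genome of
# 100) is far more than the ~1 expected by chance:
p = enrichment_pvalue(100, 10, 10, 5)
print(p < 0.01)  # → True
```

Categories passing such a test for the targets of several co-expressed miRNAs at once are the ones the study reports as collectively regulated.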

Ptitsyn AA, Gimble JM
Analysis of circadian pattern reveals tissue-specific alternative transcription in leptin signaling pathway.
BMC Bioinformatics. 2007;8 Suppl 7:S15.
BACKGROUND: It has been previously reported that most mammalian genes display a circadian oscillation in their baseline expression. Consequently, the phase and amplitude of each component of a signal transduction cascade have downstream consequences. RESULTS: Here, we report our analysis of alternative transcripts in the leptin signaling pathway, which is responsible for the systemic regulation of macronutrient storage and energy balance. We focused on the circadian expression pattern of a critical component of the leptin signaling system, suppressor of cytokine signaling 3 (SOCS3). On an Affymetrix GeneChip 430A2 microarray, this gene is represented by three probe sets targeting different regions within the 3' end of the last exon. We demonstrate that in murine brown adipose tissue two downstream 3' probe sets experience circadian baseline oscillation in counter-phase to the upstream probe set. Such differences in expression patterns are a telltale sign of alternative splicing within the last exon of SOCS3. In contrast, all three probe sets oscillated in a common phase in murine liver and white adipose tissue. This suggests that the regulation of SOCS3 expression in brown fat is tissue-specific. Another component of the signaling pathway, Janus kinase (JAK), is directly regulated by SOCS and has alternative transcript probe sets oscillating in counter-phase in a white adipose tissue-specific manner. CONCLUSION: We hypothesize that differential oscillation of alternative transcripts may provide a mechanism to maintain steady levels of expression in spite of circadian baseline variation. [Abstract/Link to Full Text]

Churbanov A, Baribault C, Winters-Hilt S
Duration learning for analysis of nanopore ionic current blockades.
BMC Bioinformatics. 2007;8 Suppl 7:S14.
BACKGROUND: Ionic current blockade signal processing, for use in nanopore detection, offers a promising new way to analyze single molecule properties, with potential implications for DNA sequencing. The alpha-Hemolysin transmembrane channel interacts with a translocating molecule in a nontrivial way, frequently evidenced by a complex ionic flow blockade pattern. Typically, recorded current blockade signals have several levels of blockade, with various durations, all obeying a fixed statistical profile for a given molecule. Hidden Markov Model (HMM) based duration learning experiments on artificial two-level Gaussian blockade signals helped us to identify a proper modeling framework. We then applied our framework to real multi-level DNA hairpin blockade signals. RESULTS: The identified upper-level blockade state is observed with durations that are geometrically distributed (consistent with a physical decay process for remaining in any given state). We show that a mixture of convolution chains of geometrically distributed states is better for representing multimodal, long-tailed duration phenomena. Based on learned HMM profiles we are able to classify 9 base-pair DNA hairpins with accuracy up to 99.5% on signals from same-day experiments. CONCLUSION: We have demonstrated several implementations for de novo estimation of the duration distribution probability density function within an HMM framework and applied our model topology to real data. The proposed design could be handy in molecular analysis based on nanopore current blockade signals. [Abstract/Link to Full Text]

Recent Articles in Biomedical Digital Libraries

Barendse W
The strike rate index: a new index for journal quality based on journal size and the h-index of citations.
Biomed Digit Libr. 2007;4:3.
Quantifying the impact of scientific research is almost always controversial, and there is a need for a uniform method that can be applied across all fields. Increasingly, however, the quantification has been summed up in the impact factor of the journal in which the work is published, which is known to show differences between fields. Here the h-index, a way to summarize an individual's highly cited work, was calculated for journals over a twenty-year time span and compared to the size of the journal in four fields: Agriculture, Condensed Matter Physics, Genetics and Heredity, and Mathematical Physics. There is a linear log-log relationship between the h-index and the size of the journal: the larger the journal, the more likely it is to have a high h-index. The four fields cannot be separated from each other, suggesting that this relationship applies to all fields. A strike rate index (SRI) based on the log relationship of the h-index and the size of the journal shows a similar distribution in the four fields, with similar thresholds for quality, allowing journals across diverse fields to be compared to each other. The SRI explains more than four times the variation in citation counts compared to the impact factor. [Abstract/Link to Full Text]
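The abstract does not state the SRI formula, so the sketch below only illustrates the underlying idea: fit the linear log-log relationship between journal size and h-index, then score each journal by how far it sits above or below the fitted line. The journal names and counts are hypothetical, and the residual-based score is a stand-in, not the paper's actual SRI definition.

```python
import math

# Hypothetical (journal_size, h_index) pairs -- illustrative only,
# not data from the paper.
journals = {
    "J_SmallNiche":   (200, 15),
    "J_MidField":     (1500, 40),
    "J_LargeGeneral": (12000, 90),
    "J_Overachiever": (800, 55),
}

def loglog_fit(pairs):
    """Ordinary least-squares fit of log(h) = a + b * log(n)."""
    pairs = list(pairs)
    xs = [math.log(n) for n, _ in pairs]
    ys = [math.log(h) for _, h in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

a, b = loglog_fit(journals.values())

def size_adjusted_score(n, h):
    """Residual above/below the fitted log-log line: a size-adjusted quality
    signal in the spirit of the SRI (the exact SRI formula differs)."""
    return math.log(h) - (a + b * math.log(n))

scores = {name: size_adjusted_score(n, h) for name, (n, h) in journals.items()}
```

By construction the score is size-adjusted: a mid-sized journal with an unusually high h-index can outscore a large journal whose h-index merely matches what its size predicts.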

Ostermann T, Zillmann H, Raak CK, Buessing A, Matthiessen PF
CAMbase--an XML-based bibliographical database on Complementary and Alternative Medicine (CAM).
Biomed Digit Libr. 2007;4:2.
The term "Complementary and Alternative Medicine (CAM)" covers a variety of approaches to medical theory and practice which are not commonly accepted by representatives of conventional medicine. In the past two decades, these approaches have been studied in various areas of medicine. Although there appears to be a growing number of scientific publications on CAM, the complete spectrum of complementary therapies still requires more information about published evidence. A majority of these research publications are still not listed in electronic bibliographical databases such as MEDLINE. However, with a growing demand by patients for such therapies, physicians increasingly need an overview of scientific publications on CAM. Bearing this in mind, CAMbase, a bibliographical database on CAM, was launched in order to close this gap. It can be accessed online free of charge. The user can peruse more than 80,000 records from over 30 journals and periodicals on CAM, which are stored in CAMbase. A special search engine performing syntactic and semantic analysis of textual phrases allows the user to quickly find relevant bibliographical information on CAM. Between August 2003 and July 2006, 43,299 search queries, an average of 38 search queries per day, were registered, focusing on CAM topics such as acupuncture, cancer or general safety aspects. Analysis of the requests led to the conclusion that CAMbase is used not only by scientists and researchers but also by physicians and patients who want to find out more about CAM. Closely related to this effort is our aim to establish a modern library center on Complementary Medicine which offers the complete spectrum of a modern digital library, including a document delivery service for physicians, therapists, scientists and researchers. [Abstract/Link to Full Text]

Warlick SE, Vaughan KT
Factors influencing publication choice: why faculty choose open access.
Biomed Digit Libr. 2007;4:1.
BACKGROUND: In an attempt to identify motivating factors involved in decisions to publish in open access and open archives (OA) journals, individual interviews with biomedical faculty members at the University of North Carolina at Chapel Hill (UNC-Chapel Hill) and Duke University, two major research universities, were conducted. The interviews focused on faculty identified as early adopters of OA/free full-text publishing. METHODS: Searches conducted in PubMed and PubMed Central identified faculty from the two institutions who have published works in OA/free full-text journals. The searches targeted authors with multiple OA citations during a specified 18 month period. Semi-structured interviews were conducted with the most prolific OA authors at each university. Individual interviews attempted to determine whether the authors were aware they published in OA journals, why they chose to publish in OA journals, what factors influenced their publishing decisions, and their general attitude towards OA publishing models. RESULTS & DISCUSSION: Fourteen interviews were granted and completed. Respondents included a fairly even mix of Assistant, Associate and Full professors. Results indicate that when targeting biomedical faculty at UNC-Chapel Hill and Duke, speed of publication and copyright retention are unlikely motivating factors or incentives for the promotion of OA publishing. In addition, author fees required by some open access journals are unlikely barriers or disincentives. CONCLUSION: It appears that publication quality is of utmost importance when choosing publication venues in general, while free access and visibility are specifically noted incentives for selection of OA journals. Therefore, free public availability and increased exposure may not be strong enough incentives for authors to choose open access over more traditional and respected subscription based publications, unless the quality issue is also addressed. [Abstract/Link to Full Text]

Ajuwon GA
Use of the Internet for health information by physicians for patient care in a teaching hospital in Ibadan, Nigeria.
Biomed Digit Libr. 2006;3:12.
BACKGROUND: The Internet is the world's largest network of information, communication and services. Although the Internet is widely used in medicine and has made significant impact in research, training and patient care, few studies have explored the extent to which Nigerian physicians use Internet resources for patient care. The objective of this study was to assess physicians' use of the Internet for health information for patient care. METHOD: 172 physicians at University College Hospital (UCH), Ibadan, Nigeria, completed a 31-item, anonymous, standardized questionnaire. The Epi-Info software was used for data analysis. RESULTS: The mean age of the respondents was 31.95 years (SD 4.94). Virtually all (98%) of the respondents had used the Internet; 76% accessed it from cyber cafes. E-mail was the most commonly used Internet service (64%). Ninety percent of the respondents reported they had obtained information from the Internet for patient care; of this number, 76.2% had searched a database. The database most recently searched was MEDLINE/PubMed in 99% of cases. Only 7% of the respondents had ever searched the Cochrane Library. More than half (58.1%) felt they lacked the confidence to download full-text articles from online sources such as the Health InterNetwork Access to Research Initiative (HINARI). Multiple barriers to increased use of the Internet were identified, including poor availability of broadband (fast connection speed) Internet access, lack of information searching skills, cost of access and information overload. CONCLUSION: Physicians' use of the Internet for health information for patient care was widespread, but use of evidence-based medicine (EBM) resources such as the Cochrane Library, UpToDate and Clinical Evidence was minimal. Awareness and training in the use of EBM resources for patient care is needed. Introduction of EBM in the teaching curriculum will enhance the use of EBM resources by physicians for patient care. [Abstract/Link to Full Text]

Demaine J, Martin J, Wei L, de Bruijn B
LitMiner: integration of library services within a bio-informatics application.
Biomed Digit Libr. 2006;3:11.
BACKGROUND: This paper examines how the adoption of a subject-specific library service has changed the way in which its users interact with a digital library. The LitMiner text-analysis application was developed to enable biologists to explore gene relationships in the published literature. The application features a suite of interfaces that enable users to search PubMed as well as local databases, to view document abstracts, to filter terms, to select gene name aliases, and to visualize the co-occurrences of genes in the literature. At each of these stages, LitMiner offers the functionality of a digital library. Documents that are accessible online are identified by an icon. Users can also order documents from their institution's library collection from within the application. In so doing, LitMiner aims to integrate digital library services into the research process of its users. METHODS: Case study. RESULTS: This integration of digital library services into the research process of biologists results in increased access to the published literature. CONCLUSION: In order to make better use of their collections, digital libraries should customize their services to suit the research needs of their patrons. [Abstract/Link to Full Text]

Howse DK, Bracke PJ, Keim SM
Technology mediator: a new role for the reference librarian?
Biomed Digit Libr. 2006;3:10.
The Arizona Health Sciences Library has collaborated with clinical faculty to develop a federated search engine that is useful for meeting real-time clinical information needs. This article proposes a technology mediation role for the reference librarian that was inspired by the project, and describes the collaborative model used for developing technology-mediated services for targeted users. [Abstract/Link to Full Text]

Schell MB
The use of free resources in a subscription-based digital library: a case study of the North Carolina AHEC Digital Library.
Biomed Digit Libr. 2006;3:9.
BACKGROUND: The North Carolina (NC) Area Health Education Center's (AHEC) Digital Library (ADL) is a web portal designed to meet the information needs of health professionals across the state by pulling together a set of resources from numerous different sources and linking a pool of users to only the resources for which they have eligibility. Although the ADL was designed with the primary purpose of linking health care professionals to a set of licensed resources, the ADL also contains a significant number of links to free resources. These resources are available to any ADL member logging into their ADL account and to guest visitors to the ADL. While there are regular assessments of the subscription resources in the ADL as to utility and frequency of use, up until this point there has been no systematic analysis of the use of the overall set of free resources. It was decided to undertake an examination of the usage of ADL free resources over a 6-month period to analyze the utility of these resources to both ADL members and guests. METHODS: Each time a resource is accessed through the ADL, it is logged in a table. This study used a SQL query to pull every free resource accessed between November 1, 2005 and April 30, 2006. An additional query also pulled the user information for each free resource accessed. Once the queries of the database were complete, the results were imported into an Excel spreadsheet and analyzed using basic descriptive statistics. RESULTS: The vast majority of resource use through the ADL is of licensed resources. Of the 2056 free resource URLs in the ADL, 1351 (65%) received at least one link out. The single most popular free resource was PubMed, with 4803 link outs, or nearly 20% of the total link outs to free resources.
The breakdown of free resource use by different user groups indicates that the highest percentage of use of free resources was by guests, followed by institutional affiliates and AHEC Faculty/Staff. The next three highest user groups accessing free resources were paid members, preceptors, and residents. CONCLUSION: The only free resource capturing a significant number of link outs is the free link to PubMed. This reflects the importance placed on traditional medical literature searching by the ADL clinical user base. Institutional affiliates access free resources through the ADL with the second highest frequency of all the user groups. Finally, in analyzing use of free resources, it is important to note the overall limitations of this survey. While link outs are excellent indicators of frequency of use, they do not provide any information about the ultimate usefulness of the resource being accessed. Further studies would need to examine not only the quantitative use of resources but also their qualitative importance to the user. [Abstract/Link to Full Text]
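The kind of analysis the METHODS section describes, a SQL pull of link outs within a date window followed by descriptive statistics, can also be run end to end in Python rather than exported to Excel. The ADL's actual schema is not published; the table and column names below are hypothetical stand-ins.

```python
import sqlite3
from collections import Counter

# Hypothetical stand-in for the ADL access log; schema and data are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE link_outs (resource_url TEXT, user_group TEXT, accessed DATE);
INSERT INTO link_outs VALUES
  ('https://pubmed.ncbi.nlm.nih.gov/', 'guest',    '2005-11-03'),
  ('https://pubmed.ncbi.nlm.nih.gov/', 'resident', '2006-01-15'),
  ('https://medlineplus.gov/',         'guest',    '2006-02-20'),
  ('https://pubmed.ncbi.nlm.nih.gov/', 'faculty',  '2006-07-01');
""")

# Pull every free-resource link out inside the six-month study window;
# the last row above falls outside it and is excluded.
rows = conn.execute(
    """SELECT resource_url, user_group FROM link_outs
       WHERE accessed BETWEEN '2005-11-01' AND '2006-04-30'"""
).fetchall()

# Basic descriptive statistics: link outs per resource and per user group.
by_resource = Counter(url for url, _ in rows)
by_group = Counter(group for _, group in rows)
```

Counting per resource and per user group in one pass reproduces both result tables of the study (most-used free resources, and usage by user group) without a spreadsheet step.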

Carter TP, Carter AO, Broomes G
Purchasing online journal access for a hospital medical library: how to identify value in commercially available products.
Biomed Digit Libr. 2006;3:8.
BACKGROUND: Medical practice today requires evaluating large amounts of information which should be available at all times. This information is found most easily in a digital form. Some information has already been evaluated for validity (evidence based medicine sources) and some is in unevaluated form (paper and online journals). In order to improve access to digital information, the School of Clinical Medicine and Research at the University of the West Indies and Queen Elizabeth Hospital decided to enhance the library by offering online full text medical articles and evidence based medicine sources. The aim of this paper is to evaluate the relative value of online journal commercial products available for a small hospital and medical school library. METHODS: Three reference standards were chosen to represent the ideal list of core periodicals for a broad range of medical care: 2 Brandon/Hill selected lists of journals for the small medical library (BH and BH core) and the academic medical library core journal collection chosen for the Florida State University College of Medicine Medical Library. Six commercially available collections were compared to the reference standards and to the current paper journal subscription list with regard to the number of journals matched and the cost per journal matched. Ease of use and presence of secondary sources were also considered. RESULTS: The cost per journal matched ranged from US $81 to $3194. Because of their low subscription prices, the Biomedical Reference Collection and Proquest products were the most cost beneficial. However, they provided low coverage of the ideal lists (12-17% and 21-32% respectively), contained significant embargoes on current editions, were not user-friendly and contained no secondary sources. The Ovid Brandon/Hill Plus Collection overcame these difficulties and provided higher coverage of the ideal lists (14-47%), but at a much higher cost per journal matched.
CONCLUSION: After considering costs, benefits, ease of use, embargoes, presence of secondary sources (ACP Journal Club, DARE), the Ovid Brandon/Hill Plus Collection was the best choice for our hospital considering our budget. However, the option to individually select our own journal list from Ovid and pay per journal has a certain appeal as well. [Abstract/Link to Full Text]

Bakkalbasi N, Bauer K, Glover J, Wang L
Three options for citation tracking: Google Scholar, Scopus and Web of Science.
Biomed Digit Libr. 2006;3:7.
BACKGROUND: Researchers turn to citation tracking to find the most influential articles for a particular topic and to see how often their own published papers are cited. For years, researchers looking for this type of information had only one resource to consult: the Web of Science from Thomson Scientific. In 2004 two competitors emerged--Scopus from Elsevier and Google Scholar from Google. The research reported here uses citation analysis in an observational study examining these three databases, comparing citation counts for articles from two disciplines (oncology and condensed matter physics) and two years (1993 and 2003) to test the hypothesis that the different scholarly publication coverage provided by the three search tools will lead to different citation counts from each. METHODS: Eleven journal titles with varying impact factors were selected from each discipline (oncology and condensed matter physics) using the Journal Citation Reports (JCR). All articles published in the selected titles were retrieved for the years 1993 and 2003, and a stratified random sample of articles was chosen, resulting in four sets of articles. During the week of November 7-12, 2005, the citation counts for each research article were extracted from the three sources. The actual citing references for a subset of the articles published in 2003 were also gathered from each of the three sources. RESULTS: For oncology 1993, Web of Science returned the highest average number of citations, 45.3. Scopus returned the highest average number of citations (8.9) for oncology 2003. Web of Science returned the highest number of citations for condensed matter physics 1993 and 2003 (22.5 and 3.9 respectively). The data showed a significant difference in the mean citation rates between all pairs of resources except between Google Scholar and Scopus for condensed matter physics 2003.
For articles published in 2003 Google Scholar returned the largest amount of unique citing material for oncology and Web of Science returned the most for condensed matter physics. CONCLUSION: This study did not identify any one of these three resources as the answer to all citation tracking needs. Scopus showed strength in providing citing literature for current (2003) oncology articles, while Web of Science produced more citing material for 2003 and 1993 condensed matter physics, and 1993 oncology articles. All three tools returned some unique material. Our data indicate that the question of which tool provides the most complete set of citing literature may depend on the subject and publication year of a given article. [Abstract/Link to Full Text]

Koehler BM, Roderer NK
Scholarly communications program: force for change.
Biomed Digit Libr. 2006;3:6.
The changing landscape of scholarly publication and increasing journal costs have resulted in a need for proactive behavior in libraries. At Johns Hopkins University in Baltimore, Maryland, a group of librarians joined forces to bring these issues to the attention of faculty and to begin a dialog leading to change. This commentary describes a comprehensive program undertaken to raise faculty awareness of scholarly communications issues. In addition to raising faculty interest in the issues at hand, the endeavor also highlights an area where library liaisons can increase their communication with the units they serve. [Abstract/Link to Full Text]

McConnaughy RP, Wilson SP
Using geographic information systems to identify prospective marketing areas for a special library.
Biomed Digit Libr. 2006;3:4.
BACKGROUND: The Center for Disability Resources (CDR) Library is the largest collection of its kind in the Southeastern United States, consisting of over 5,200 books, videos/DVDs, brochures, and audiotapes covering a variety of disability-related topics, from autism to transition resources. The purpose of the library is to support the information needs of families, faculty, students, staff, and other professionals in South Carolina working with individuals with disabilities. The CDR Library is funded on a yearly basis; therefore, maintaining high usage is crucial. A variety of promotional efforts have been used to attract new patrons to the library. Anyone in South Carolina can check out materials from the library, and most of the patrons use the library remotely by requesting materials, which are then mailed to them. The goal of this project was to identify areas of low geographic usage as a means of identifying locations for future library marketing efforts. METHODS: Nearly four years' worth of library statistics were compiled in a spreadsheet that provided information per county on the number of checkouts, the number of renewals, and the population. Five maps were produced using ArcView GIS software to provide visual representations of patron checkout and renewal behavior per county. RESULTS: Out of the 46 counties in South Carolina, eight counties never checked out materials from the library. As expected, urban areas and counties near the library's physical location had high usage totals. CONCLUSION: The visual representation of the data made identification of low usage regions easier than using a standalone database with no visual-spatial component. The low usage counties will be the focus of future Center for Disability Resources Library marketing efforts.
Because Geographic Information Systems create visual-spatial representations that communicate information more efficiently than stand-alone database output can, librarians may benefit from the software's use as a supplemental tool for tracking library usage and planning promotional efforts. [Abstract/Link to Full Text]

Ramsey EC
Multimedia Bootcamp: a health sciences library provides basic training to promote faculty technology integration.
Biomed Digit Libr. 2006;3:3.
BACKGROUND: Recent research has shown a backlash against the enthusiastic promotion of technological solutions as replacements for traditional educational content delivery. Many institutions, including the University of Virginia, have committed staff and resources to supporting state-of-the-art, showpiece educational technology projects. However, the Claude Moore Health Sciences Library has taken the approach of helping Health Sciences faculty be more comfortable using technology in incremental ways for instruction and research presentations. In July 2004, to raise awareness of self-service multimedia resources for instructional and professional development needs, the Library conducted a "Multimedia Bootcamp" for nine Health Sciences faculty and fellows. METHODS: Case study. RESULTS: Program stewardship by a single Library faculty member contributed to the delivery of an integrated learning experience. The amount of time required to attend the sessions and complete homework was the maximum fellows had to devote to such pursuits. The benefit of introducing technology unfamiliar to most fellows allowed program instructors to start everyone at the same baseline while not appearing to pass judgment on the technology literacy skills of faculty. The combination of wrapping the program in the trappings of a fellowship and selecting fellows who could commit to a majority of scheduled sessions yielded strong commitment from participants as evidenced by high attendance and a 100% rate of assignment completion. Response rates to follow-up evaluation requests, as well as continued use of Media Studio resources and Library expertise for projects begun or conceived during Bootcamp, bode well for the long-term success of this program. CONCLUSION: An incremental approach to integrating technology with current practices in instruction and presentation provided a supportive yet energizing environment for Health Sciences faculty. 
Keys to this program were its faculty focus, traditional hands-on instruction, unrestricted access to technology tools and support, and inclusion of criteria for evaluating when multimedia can augment pedagogical aims. [Abstract/Link to Full Text]

Bekhuis T
Conceptual biology, hypothesis discovery, and text mining: Swanson's legacy.
Biomed Digit Libr. 2006;3:2.
Innovative biomedical librarians and information specialists who want to expand their roles as expert searchers need to know about profound changes in biology and parallel trends in text mining. In recent years, conceptual biology has emerged as a complement to empirical biology. This is partly in response to the availability of massive digital resources such as the network of databases for molecular biologists at the National Center for Biotechnology Information. Developments in text mining and hypothesis discovery systems based on the early work of Swanson, a mathematician and information scientist, are coincident with the emergence of conceptual biology. Very little has been written to introduce biomedical digital librarians to these new trends. In this paper, background on data and text mining, as well as on knowledge discovery in databases (KDD) and in text (KDT), is presented, followed by a brief review of Swanson's ideas and a discussion of recent approaches to hypothesis discovery and testing. 'Testing' in the context of text mining involves partially automated methods for finding evidence in the literature to support hypothetical relationships. Concluding remarks follow regarding (a) the limits of current strategies for evaluation of hypothesis discovery systems and (b) the role of literature-based discovery in concert with empirical research. A report of an informatics-driven literature review for biomarkers of systemic lupus erythematosus is also mentioned. Swanson's vision of the hidden value in the literature of science and, by extension, in biomedical digital databases, is still remarkably generative for information scientists, biologists, and physicians. [Abstract/Link to Full Text]

Burnham JF
Scopus database: a review.
Biomed Digit Libr. 2006;3:1.
The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs. [Abstract/Link to Full Text]

Dong P, Loh M, Mondry A
The "impact factor" revisited.
Biomed Digit Libr. 2005 Dec 5;2:7.
The number of scientific journals has become so large that individuals, institutions and institutional libraries cannot completely store their physical content. In order to prioritize the choice of quality information sources, librarians and scientists are in need of reliable decision aids. The "impact factor" (IF) is the most commonly used assessment aid for deciding which journals should receive a scholarly submission or attention from research readership. It is also an often misunderstood tool. This narrative review explains how the IF is calculated, how bias is introduced into the calculation, which questions the IF can or cannot answer, and how different professional groups can benefit from IF use. [Abstract/Link to Full Text]

Dong P, Loh M, Mondry A
Relevance similarity: an alternative means to monitor information retrieval systems.
Biomed Digit Libr. 2005 Jul 20;2:6.
BACKGROUND: Relevance assessment is a major problem in the evaluation of information retrieval systems. The work presented here introduces a new parameter, "Relevance Similarity", for the measurement of the variation of relevance assessment. In a situation where individual assessment can be compared with a gold standard, this parameter is used to study the effect of such variation on the performance of a medical information retrieval system. In such a setting, Relevance Similarity is the ratio of assessors who rank a given document the same as the gold standard over the total number of assessors in the group. METHODS: The study was carried out on a collection of Critically Appraised Topics (CATs). Twelve volunteers were divided into two groups according to their domain knowledge. They assessed the relevance of retrieved topics obtained by querying a meta-search engine with ten keywords related to medical science. Their assessments were compared to the gold standard assessment, and Relevance Similarities were calculated as the ratio of positive concordance with the gold standard for each topic. RESULTS: The similarity comparison among groups showed that a higher degree of agreement exists among evaluators with more subject knowledge. The performance of the retrieval system was not significantly different as a result of the variations in relevance assessment in this particular query set. CONCLUSION: In assessment situations where evaluators can be compared to a gold standard, Relevance Similarity provides an alternative evaluation technique to the commonly used kappa scores, which may give paradoxically low scores in highly biased situations such as document repositories containing large quantities of relevant data. [Abstract/Link to Full Text]
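The definition given above, the number of assessors who rank a document the same as the gold standard over the total number of assessors, translates directly into code. The judgment data below are invented for illustration.

```python
def relevance_similarity(assessments, gold):
    """Relevance Similarity for one document: the fraction of assessors
    whose relevant/not-relevant judgment matches the gold standard."""
    if not assessments:
        raise ValueError("need at least one assessor")
    agree = sum(1 for a in assessments if a == gold)
    return agree / len(assessments)

# Six assessors judge one retrieved topic; the gold standard says "relevant".
judgments = [True, True, False, True, True, False]
rs = relevance_similarity(judgments, gold=True)  # 4 of 6 agree
```

Unlike kappa, which corrects for chance agreement and can collapse when nearly every document is relevant, this ratio stays interpretable in heavily skewed collections, which is the advantage the conclusion highlights.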

Spasser MA
Review of Doody's Core Titles in the Health Sciences 2004 (DCT 2004).
Biomed Digit Libr. 2005 Jun 29;2:5. [Abstract/Link to Full Text]

Bohne-Lang A, Lang E, Taube A
PMD2HD--a web tool aligning a PubMed search results page with the local German Cancer Research Centre library collection.
Biomed Digit Libr. 2005 Jun 27;2:4.
BACKGROUND: Web-based searching is the accepted contemporary mode of retrieving relevant literature, and retrieving as many full text articles as possible is a typical prerequisite for research success. In most cases only a proportion of references will be directly accessible as digital reprints through displayed links. A large number of references, however, have to be verified in library catalogues and, depending on their availability, are accessible as print holdings or by interlibrary loan request. METHODS: The problem of verifying local print holdings from an initial retrieval set of citations can be solved using Z39.50, an ANSI protocol for interactively querying library information systems. Numerous systems include Z39.50 interfaces and therefore can process Z39.50 interactive requests. However, the programmed query interaction command structure is non-intuitive and inaccessible to the average biomedical researcher. For the typical user, it is necessary to implement the protocol within a tool that hides and handles Z39.50 syntax, presenting a comfortable user interface. RESULTS: PMD2HD is a web tool implementing Z39.50 to provide an appropriately functional and usable interface to integrate into the typical workflow that follows an initial PubMed literature search, providing users with an immediate asset to assist in the most tedious step in literature retrieval, checking for subscription holdings against a local online catalogue. CONCLUSION: PMD2HD can facilitate literature access considerably with respect to the time and cost of manual comparisons of search results with local catalogue holdings. The example presented in this article is related to the library system and collections of the German Cancer Research Centre. However, the PMD2HD software architecture and use of common Z39.50 protocol commands allow for transfer to a broad range of scientific libraries using Z39.50-compatible library information systems. [Abstract/Link to Full Text]

Greenberg CJ
Good old days?
Biomed Digit Libr. 2005 Apr 13;2(1):3.
Alternative models of subsidizing scholarly publishing and dissemination have germinated and gathered momentum in the fertile soil of dissatisfaction. Like the stubborn spring dandelion that needs but a small crack in the sidewalk to flower boldly, the first flowers of Open Access in library literature, including Biomedical Digital Libraries, have sensed their opportunity to change the existing paradigm of giving away our scholarship and intellectual property, only to buy it back for the privilege of knowing it can be read. Will biomedical digital library and informatics researchers understand their role in a new era of Open Access simply by desiring an immediate uninhibited global audience and recognizing the necessity of open access peer-reviewed literature to become self-sufficient? [Abstract/Link to Full Text]

Banks MA
The excitement of Google Scholar, the worry of Google Print.
Biomed Digit Libr. 2005 Mar 22;2(1):2.
In late 2004 Google announced two major projects, the unveiling of Google Scholar and a major expansion of the Google Print digitization program. Both projects have generated discussion within the library and research communities, and Google Print has received significant media attention. This commentary describes exciting educational possibilities stimulated by Google Scholar, and argues for caution regarding the Google Print project. [Abstract/Link to Full Text]

Bohne-Lang A, Lang E
Do we need a Unique Scientist ID for publications in biomedicine?
Biomed Digit Libr. 2005 Mar 22;2(1):1.
BACKGROUND: The PubMed database contains nearly 15 million references from more than 4,800 biomedical journals. In general, authors of scientific articles are addressed by their last name and forename initial. Unfortunately, in some publications, and consequently within the online databases, only one letter abbreviates the author's forename. DISCUSSION: Names can be too common, and thus not unique enough, to serve as search criteria: a common name with only one initial could retrieve pertinent citations, but also many false drops (retrieval matching the search criteria but indisputably irrelevant). Today, Ph.D. students, other researchers, and women publish scientific work; a person may also have several names and publish under each of them. A Unique Scientist ID could help to address people in peer-to-peer (P2P) networks. As a starting point, perhaps PubMed could generate and manage such a scientist ID. SUMMARY: A Unique Scientist ID would improve knowledge management in science. [Abstract/Link to Full Text]

Larue EM
Using GIS to establish a public library consumer health collection.
Biomed Digit Libr. 2004 Nov 18;1(1):3.
BACKGROUND: Learning the exact demographic characteristics of the neighborhood that a public library serves assists the collection development librarian in building an appropriate collection. Gathering that demographic information can be a lengthy process, and formatting the information for the neighborhood in question becomes arduous. As society ages and the methods for health care evolve, people may take charge of their own health. With this prospect, public libraries should consider creating a consumer health collection to assist the public in their health care needs. Neighborhood demographic information can inform collection development librarians of the dominant age groups, sexes, and races within the neighborhood, so that appropriate consumer health materials may be assembled in the public library. METHODS: In order to visualize the demographics of a neighborhood, the computer program ArcView GIS (geographic information systems) was used to create maps for specified areas. The neighborhood data were taken from the U.S. Census Department's annual census, and library addresses were accumulated through a free database. After the census block information was downloaded, the data were manipulated with ArcView GIS and queried to produce maps displaying the requested neighborhood demographics in relation to libraries. RESULTS: ArcView GIS produced maps displaying public libraries and requested demographics. After viewing the maps, the collection development librarian can see exactly what populations are served by the library and adjust the library's collection accordingly. CONCLUSIONS: ArcView GIS can be used to produce maps displaying the communities that libraries serve, to identify boundaries, whether man-made or natural, that prohibit customer service, and to assist collection development librarians in justifying their purchases for a dedicated consumer health collection or resources in general. [Abstract/Link to Full Text]
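The spatial query at the heart of this workflow can be illustrated without GIS software. The Python sketch below is not ArcView GIS: it uses a planar distance approximation and entirely invented coordinates and population counts, simply to show the select-then-aggregate pattern of querying census blocks near a library.

```python
import math

# Illustrative sketch (not ArcView GIS): aggregate census-block demographics
# within a fixed radius of a library, using a planar distance approximation.
# All coordinates and counts are invented example data.

def blocks_near(library, blocks, radius):
    """Return census blocks whose centroid lies within `radius` of the library."""
    lx, ly = library
    return [b for b in blocks if math.hypot(b["x"] - lx, b["y"] - ly) <= radius]

def age_profile(blocks):
    """Sum population counts by age group across the selected blocks."""
    profile = {}
    for b in blocks:
        for group, count in b["ages"].items():
            profile[group] = profile.get(group, 0) + count
    return profile

blocks = [
    {"x": 0.2, "y": 0.1, "ages": {"0-17": 120, "65+": 340}},
    {"x": 5.0, "y": 5.0, "ages": {"0-17": 900, "65+": 50}},  # outside radius
]
print(age_profile(blocks_near((0, 0), blocks, radius=1.0)))
```

A real GIS performs this with projected coordinates, polygon boundaries, and spatial joins, but the underlying question is the same: which blocks fall within a library's service area, and what do their demographics sum to.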

Greenberg CJ
True good.
Biomed Digit Libr. 2004 Sep 20;1(1):1. [Abstract/Link to Full Text]

Fuller SS, Revere D, Bugni PF, Martin GM
A knowledgebase system to enhance scientific discovery: Telemakus.
Biomed Digit Libr. 2004 Sep 21;1(1):2.
BACKGROUND: With the rapid expansion of scientific research, the ability to effectively find or integrate new domain knowledge in the sciences is proving increasingly difficult. Efforts to improve and speed up scientific discovery are being explored on a number of fronts. However, much of this work is based on traditional search and retrieval approaches and the bibliographic citation presentation format remains unchanged. METHODS: Case study. RESULTS: The Telemakus KnowledgeBase System provides flexible new tools for creating knowledgebases to facilitate retrieval and review of scientific research reports. In formalizing the representation of the research methods and results of scientific reports, Telemakus offers a potential strategy to enhance the scientific discovery process. While other research has demonstrated that aggregating and analyzing research findings across domains augments knowledge discovery, the Telemakus system is unique in combining document surrogates with interactive concept maps of linked relationships across groups of research reports. 
CONCLUSION: Based on how scientists conduct research and read the literature, the Telemakus KnowledgeBase System brings together three innovations in analyzing, displaying and summarizing research reports across a domain: (1) research report schema, a document surrogate of extracted research methods and findings presented in a consistent and structured schema format which mimics the research process itself and provides a high-level surrogate to facilitate searching and rapid review of retrieved documents; (2) research findings, used to index the documents, allowing searchers to request, for example, research studies which have studied the relationship between neoplasms and vitamin E; and (3) visual exploration interface of linked relationships for interactive querying of research findings across the knowledgebase and graphical displays of what is known as well as, through gaps in the map, what is yet to be tested. The rationale and system architecture are described and plans for the future are discussed. [Abstract/Link to Full Text]
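The findings index described above, which lets a searcher ask for studies relating two concepts such as neoplasms and vitamin E, can be sketched as a simple data structure. This is a hypothetical Python illustration, not the Telemakus implementation; document IDs and concept names are invented.

```python
from collections import defaultdict

# Hypothetical sketch of a Telemakus-style findings index: each extracted
# research finding links an unordered pair of concepts to the reports that
# studied that relationship. Document IDs and concepts are invented.

findings_index = defaultdict(set)

def index_finding(doc_id, concept_a, concept_b):
    """Index one extracted finding under its (unordered) concept pair."""
    findings_index[frozenset((concept_a, concept_b))].add(doc_id)

def reports_studying(concept_a, concept_b):
    """Return reports whose findings relate the two concepts."""
    return sorted(findings_index[frozenset((concept_a, concept_b))])

index_finding("rpt-001", "neoplasms", "vitamin E")
index_finding("rpt-007", "neoplasms", "vitamin E")
index_finding("rpt-003", "neoplasms", "caloric restriction")
print(reports_studying("vitamin E", "neoplasms"))  # ['rpt-001', 'rpt-007']
```

Using a `frozenset` as the key makes the concept pair order-independent, which mirrors the abstract's idea that a relationship between two concepts, not a keyword, is the unit of retrieval; gaps in the index are the untested relationships the concept maps expose.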

Recent Articles in Cell Biology Education

Hinchcliffe EH
Using long-term time-lapse imaging of mammalian cell cycle progression for laboratory instruction and analysis.
Cell Biol Educ. 2005;4(4):284-90. [Abstract/Link to Full Text]

Reingold ID
Organic first: a biology-friendly chemistry curriculum.
Cell Biol Educ. 2005;4(4):281-3. [Abstract/Link to Full Text]

Watters C
Video views and reviews: gastrulation and the fashioning of animal embryos.
Cell Biol Educ. 2005;4(4):273-8. [Abstract/Link to Full Text]

Labov JB
From the National Academies: ongoing challenges to evolution education: resources and activities of the National Academies.
Cell Biol Educ. 2005;4(4):269-72. [Abstract/Link to Full Text]

Allen D, Tanner K
Infusing active learning into the large-enrollment biology class: seven strategies, from the simple to complex.
Cell Biol Educ. 2005;4(4):262-8. [Abstract/Link to Full Text]

Porter JR
Information literacy in biology education: an example from an advanced cell biology course.
Cell Biol Educ. 2005;4(4):335-43.
Information literacy skills are critically important for the undergraduate biology student. The ability to find, understand, evaluate, and use information, whether from the scientific literature or from Web resources, is essential for a good understanding of a topic and for the conduct of research. A project in which students receive information literacy instruction and then proceed to select, update, and write about a current research topic in an upper-level cell biology course is described. Students research the chosen topic using paper and electronic resources, generate a list of relevant articles, prepare abstracts based on papers read, and, finally, prepare a "state-of-the-art" paper on the topic. This approach, which extends over most of one semester, has resulted in a number of well-researched and well-written papers that incorporate some of the latest research in cell biology. The steps in this project have also led to students who are prepared to address future projects on new and complex topics. The project is part of an undergraduate course in cell biology, but parts of the assignments can be modified to fit a variety of subject areas and levels. [Abstract/Link to Full Text]

Turrens JF
Teaching research integrity and bioethics to science undergraduates.
Cell Biol Educ. 2005;4(4):330-4.
Undergraduate students in the Department of Biomedical Sciences at the University of South Alabama, Mobile, are required to take a course entitled "Issues in Biomedical Sciences," designed to increase students' awareness about bioethical questions and issues concerning research integrity. This paper describes the main features of this course and summarizes the results of a survey designed to evaluate the students' perceptions about the course. A summary of this study was presented at the 2002 Conference on Research Integrity in Potomac, MD, sponsored by the Office of Research Integrity of the National Institutes of Health. [Abstract/Link to Full Text]

Kumar A
Teaching systems biology: an active-learning approach.
Cell Biol Educ. 2005;4(4):323-9.
With genomics well established in modern molecular biology, recent studies have sought to further the discipline by integrating complementary methodologies into a holistic depiction of the molecular mechanisms underpinning cell function. This genomic subdiscipline, loosely termed "systems biology," presents the biology educator with both opportunities and obstacles: The benefit of exposing students to this cutting-edge scientific methodology is manifest, yet how does one convey the breadth and advantage of systems biology while still engaging the student? Here, I describe an active-learning approach to the presentation of systems biology. In graduate classes at the University of Michigan, Ann Arbor, I divided students into small groups and asked each group to interpret a sample data set (e.g., microarray data, two-hybrid data, homology-search results) describing a hypothetical signaling pathway. Mimicking realistic experimental results, each data set revealed a portion of this pathway; however, students were only able to reconstruct the full pathway by integrating all data sets, thereby exemplifying the utility in a systems biology approach. Student response to this cooperative exercise was extremely positive. In total, this approach provides an effective introduction to systems biology appropriate for students at both the undergraduate and graduate levels. [Abstract/Link to Full Text]

Bowers N, Brandon M, Hill CD
The use of a knowledge survey as an indicator of student learning in an introductory biology course.
Cell Biol Educ. 2005;4(4):311-22.
A knowledge survey (KS) is a series of content-based questions sequenced in order of presentation during a course. Students do not answer the questions; rather, they rank their confidence in their ability to answer each question. A 304-question KS was designed and implemented for a multisection, multi-instructor introductory biology course to determine whether this tool could be used to assess student learning. The KS was administered during the first 2 wk and the last 2 wk of the semester online via WebCT. Results were scored using one point for each "not confident" response (level 1), two points for each "possibly confident" response (level 2), and three points for each "confident" response (level 3). We found that scores increased significantly between the pre- and post-KS, indicating that student confidence in their knowledge of the course material increased over the semester. However, the correlation between student confidence and final grades was negligible or low, and chi-square tests show that KS scores and matched exam questions were not significantly related. We conclude that under the conditions implemented in our study, the KS does not reliably measure student learning as measured by final grades or exam questions. [Abstract/Link to Full Text]
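The scoring rule stated in the abstract (one point per "not confident" response, two per "possibly confident", three per "confident") is simple enough to express directly. A minimal Python sketch, with invented pre- and post-semester responses:

```python
# Scoring scheme from the abstract: 1 point per "not confident" response,
# 2 per "possibly confident", 3 per "confident". The responses below are
# invented example data, not the study's 304-question survey.

POINTS = {"not confident": 1, "possibly confident": 2, "confident": 3}

def ks_score(responses):
    """Total knowledge-survey score for one student's confidence rankings."""
    return sum(POINTS[r] for r in responses)

pre  = ["not confident", "not confident", "possibly confident"]
post = ["confident", "possibly confident", "confident"]
print(ks_score(pre), ks_score(post))  # 4 8
```

As the study found, a rise in this score measures growing confidence, not necessarily growing knowledge, which is exactly why the authors checked it against exam questions and final grades.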

Knight JK, Wood WB
Teaching more by lecturing less.
Cell Biol Educ. 2005;4(4):298-310.
We carried out an experiment to determine whether student learning gains in a large, traditionally taught, upper-division lecture course in developmental biology could be increased by partially changing to a more interactive classroom format. In two successive semesters, we presented the same course syllabus using different teaching styles: in fall 2003, the traditional lecture format; and in spring 2004, decreased lecturing and addition of student participation and cooperative problem solving during class time, including frequent in-class assessment of understanding. We used performance on pretests and posttests, and on homework problems to estimate and compare student learning gains between the two semesters. Our results indicated significantly higher learning gains and better conceptual understanding in the more interactive course. To assess reproducibility of these effects, we repeated the interactive course in spring 2005 with similar results. Our findings parallel results of similar teaching-style comparisons made in other disciplines. On the basis of this evidence, we propose a general model for teaching large biology courses that incorporates interactive engagement and cooperative work in place of some lecturing, while retaining course content by demanding greater student responsibility for learning outside of class. [Abstract/Link to Full Text]

Flowers SK, Easter C, Holmes A, Cohen B, Bednarski AE, Mardis ER, Wilson RK, Elgin SC
Genome science: a video tour of the Washington University Genome Sequencing Center for high school and undergraduate students.
Cell Biol Educ. 2005;4(4):291-7.
Sequencing of the human genome has ushered in a new era of biology. The technologies developed to facilitate the sequencing of the human genome are now being applied to the sequencing of other genomes. In 2004, a partnership was formed between Washington University School of Medicine Genome Sequencing Center's Outreach Program and Washington University Department of Biology Science Outreach to create a video tour depicting the processes involved in large-scale sequencing. "Sequencing a Genome: Inside the Washington University Genome Sequencing Center" is a tour of the laboratory that follows the steps in the sequencing pipeline, interspersed with animated explanations of the scientific procedures used at the facility. Accompanying interviews with the staff illustrate different entry levels for a career in genome science. This video project serves as an example of how research and academic institutions can provide teachers and students with access and exposure to innovative technologies at the forefront of biomedical research. Initial feedback on the video from undergraduate students, high school teachers, and high school students provides suggestions for use of this video in a classroom setting to supplement present curricula. [Abstract/Link to Full Text]

Klymkowsky MW
Points of view: content versus process: is this a fair choice? Can nonmajors courses lead to biological literacy? Do majors courses do any better?
Cell Biol Educ. 2005;4(3):196-8. [Abstract/Link to Full Text]

Howard DR, Miskowski JA
Using a module-based laboratory to incorporate inquiry into a large cell biology course.
Cell Biol Educ. 2005;4(3):249-60.
Because cell biology has rapidly increased in breadth and depth, instructors are challenged not only to provide undergraduate science students with a strong, up-to-date foundation of knowledge, but also to engage them in the scientific process. To these ends, revision of the Cell Biology Lab course at the University of Wisconsin-La Crosse was undertaken to allow student involvement in experimental design, emphasize data collection and analysis, make connections to the "big picture," and increase student interest in the field. Multiweek laboratory modules were developed as a method to establish an inquiry-based learning environment. Each module utilizes relevant techniques to investigate one or more questions within the context of a fictional story, and there is a progression during the semester from more instructor-guided to more open-ended student investigation. An assessment tool was developed to evaluate student attitudes regarding their lab experience. Analysis of five semesters of data strongly supports the module format as a successful model for inquiry education by increasing student interest and improving attitude toward learning. In addition, student performance on inquiry-based assignments improved over the course of each semester, suggesting an improvement in inquiry-related skills. [Abstract/Link to Full Text]

Meir E, Perry J, Stal D, Maruca S, Klopfer E
How effective are simulated molecular-level experiments for teaching diffusion and osmosis?
Cell Biol Educ. 2005;4(3):235-48.
Diffusion and osmosis are central concepts in biology, both at the cellular and organ levels. They are presented several times throughout most introductory biology textbooks (e.g., Freeman, 2002), yet both processes are often difficult for students to understand (Odom, 1995; Zuckerman, 1994; Sanger et al., 2001; and results herein). Students have deep-rooted misconceptions about how diffusion and osmosis work, especially at the molecular level. We hypothesized that this might be in part due to the inability to see and explore these processes at the molecular level. In order to investigate this, we developed new software, OsmoBeaker, which allows students to perform inquiry-based experiments at the molecular level. Here we show that these simulated laboratories do indeed teach diffusion and osmosis and help overcome some, but not all, student misconceptions. [Abstract/Link to Full Text]

Hammamieh R, Anderson M, Carr K, Tran CN, Yourick DL, Jett M
Students investigating the antiproliferative effects of synthesized drugs on mouse mammary tumor cells.
Cell Biol Educ. 2005;4(3):221-34.
The potential for personalized cancer management has long intrigued experienced researchers as well as the naïve student intern. Personalized cancer treatments based on a tumor's genetic profile are now feasible and can reveal both the cells' susceptibility and resistance to chemotherapeutic agents. In a weeklong laboratory investigation that mirrors current cancer research, undergraduate and advanced high school students determine the efficacy of common pharmacological agents through in vitro testing. Using mouse mammary tumor cell cultures treated with "unknown" drugs historically recommended for breast cancer treatment, students are introduced to common molecular biology techniques from in vitro cell culture to fluorescence microscopy. Student understanding is assessed through laboratory reports and the successful identification of the unknown drug. The sequence of doing the experiment, applying logic, and constructing a hypothesis gives the students time to discover the rationale behind the cellular drug resistance assay. The breast cancer experiment has been field tested during the past 5 yr with more than 200 precollege/undergraduate interns through the Gains in the Education of Mathematics and Science program hosted by the Walter Reed Army Institute of Research. [Abstract/Link to Full Text]

Bednarski AE, Elgin SC, Pakrasi HB
An inquiry into protein structure and genetic disease: introducing undergraduates to bioinformatics in a large introductory course.
Cell Biol Educ. 2005;4(3):207-20.
This inquiry-based lab is designed around genetic diseases with a focus on protein structure and function. To allow students to work on their own investigatory projects, 10 projects on 10 different proteins were developed. Students are grouped in sections of 20 and work in pairs on each of the projects. To begin their investigation, students are given a cDNA sequence that translates into a human protein with a single mutation. Each case results in a genetic disease that has been studied and recorded in the Online Mendelian Inheritance in Man (OMIM) database. Students use bioinformatics tools to investigate their proteins and form a hypothesis for the effect of the mutation on protein function. They are also asked to predict the impact of the mutation on human physiology and present their findings in the form of an oral report. Over five laboratory sessions, students use tools on the National Center for Biotechnology Information (NCBI) Web site (BLAST, LocusLink, OMIM, GenBank, and PubMed) as well as ExPasy, Protein Data Bank, ClustalW, the Kyoto Encyclopedia of Genes and Genomes (KEGG) database, and the structure-viewing program DeepView. Assessment results showed that students gained an understanding of the Web-based databases and tools and enjoyed the investigatory nature of the lab. [Abstract/Link to Full Text]

Shachak A, Ophir R, Rubin E
Applying instructional design theories to bioinformatics education in microarray analysis and primer design workshops.
Cell Biol Educ. 2005;4(3):199-206.
The need to support bioinformatics training has been widely recognized by scientists, industry, and government institutions. However, the discussion of instructional methods for teaching bioinformatics is only beginning. Here we report on a systematic attempt to design two bioinformatics workshops for graduate biology students on the basis of Gagne's Conditions of Learning instructional design theory. This theory, although first published in the early 1970s, is still fundamental in instructional design and instructional technology. First, top-level as well as prerequisite learning objectives for a microarray analysis workshop and a primer design workshop were defined. Then a hierarchy of objectives for each workshop was created. Hands-on tutorials were designed to meet these objectives. Finally, events of learning proposed by Gagne's theory were incorporated into the hands-on tutorials. The resultant manuals were tested on a small number of trainees, revised, and applied in 1-day bioinformatics workshops. Based on this experience and on observations made during the workshops, we conclude that Gagne's Conditions of Learning instructional design theory provides a useful framework for developing bioinformatics training, but may not be optimal as a method for teaching it. [Abstract/Link to Full Text]

Wright RL
Points of view: content versus process: is this a fair choice? Undergraduate biology courses for nonscientists: toward a lived curriculum.
Cell Biol Educ. 2005;4(3):189-96. [Abstract/Link to Full Text]

McClean P, Johnson C, Rogers R, Daniels L, Reber J, Slator BM, Terpstra J, White A
Molecular and cellular biology animations: development and impact on student learning.
Cell Biol Educ. 2005;4(2):169-79.
Educators often struggle when teaching cellular and molecular processes because typically they have only two-dimensional tools to teach something that plays out in four dimensions. Learning research has demonstrated that visualizing processes in three dimensions aids learning, and animations are effective visualization tools for novice learners and aid with long-term memory retention. The World Wide Web Instructional Committee at North Dakota State University has used these research results as an inspiration to develop a suite of high-quality animations of molecular and cellular processes. Currently, these animations represent transcription, translation, bacterial gene expression, messenger RNA (mRNA) processing, mRNA splicing, protein transport into an organelle, the electron transport chain, and the use of a biological gradient to drive adenosine triphosphate synthesis. These animations are integrated with an educational module that consists of First Look and Advanced Look components that feature captioned stills from the animation representing the key steps in the processes at varying levels of complexity. These animation-based educational modules are available via the World Wide Web. An in-class research experiment demonstrated that student retention of content material was significantly better when students received a lecture coupled with the animations and then used the animation as an individual study activity. [Abstract/Link to Full Text]

Bradford WD, Cahoon L, Freel SR, Hoopes LL, Eckdahl TT
An inexpensive gel electrophoresis-based polymerase chain reaction method for quantifying mRNA levels.
Cell Biol Educ. 2005;4(2):157-68.
In order to engage their students in a core methodology of the new genomics era, an ever-increasing number of faculty at primarily undergraduate institutions are gaining access to microarray technology. Their students are conducting successful microarray experiments designed to address a variety of interesting questions. A next step in these teaching and research laboratory projects is often validation of the microarray data for individual selected genes. In the research community, this usually involves the use of real-time polymerase chain reaction (PCR), a technology that requires instrumentation and reagents that are prohibitively expensive for most undergraduate institutions. The results of a survey of faculty teaching undergraduates in classroom and research settings indicate a clear need for an alternative approach. We sought to develop an inexpensive and student-friendly gel electrophoresis-based PCR method for quantifying messenger RNA (mRNA) levels using undergraduate researchers as models for students in teaching and research laboratories. We compared the results for three selected genes measured by microarray analysis, real-time PCR, and the gel electrophoresis-based method. The data support the use of the gel electrophoresis-based method as an inexpensive, convenient, yet reliable alternative for quantifying mRNA levels in undergraduate laboratories. [Abstract/Link to Full Text]

Smith AC, Stewart R, Shields P, Hayes-Klosteridis J, Robinson P, Yuan R
Introductory biology courses: a framework to support active learning in large enrollment introductory science courses.
Cell Biol Educ. 2005;4(2):143-56.
Active learning and research-oriented activities have been increasingly used in smaller, specialized science courses. Application of this type of scientific teaching to large enrollment introductory courses has been, however, a major challenge. The general microbiology lecture/laboratory course described has been designed to incorporate published active-learning methods. Three major case studies are used as platforms for active learning. Themes from the case studies are integrated into lectures, laboratory experiments, and in-class and online discussions and assignments. Students are stimulated to apply facts to problem-solving and to learn research skills such as data analysis, writing, and working in teams. This course is feasible only because of its organizational framework that makes use of teaching teams (made up of faculty, graduate assistants, and undergraduate assistants) and Web-based technology. Technology is a mode of communication, but also a system of course management. The relevance of this model to other biology courses led to assessment and evaluation, including an analysis of student responses to the new course, class performance, a university course evaluation, and retention of course learning. The results are indicative of an increase in student engagement in research-oriented activities and an appreciation of real-world context by students. [Abstract/Link to Full Text]

Guilford WH
"Shrink wrapping" lectures: teaching cell and molecular biology within the context of human pathologies.
Cell Biol Educ. 2005;4(2):138-42.
Students are most motivated and learn best when they are immersed in an environment that causes them to realize why they should learn. Perhaps nowhere is this truer than when teaching the biological sciences to engineers. Transitioning from a traditionally mathematics-based to a traditionally knowledge-based pedagogical style can challenge student learning and engagement. To address this, human pathologies were used as a problem-based context for teaching knowledge-based cell biological mechanisms. Lectures were divided into four modules. First, a disease was presented from clinical, economic, and etiological standpoints. Second, fundamental concepts of cell and molecular biology were taught that were directly relevant to that disease. Finally, we discussed the cellular and molecular basis of the disease based on these fundamental concepts, together with current clinical approaches to the disease. The basic science is thus presented within a "shrink wrap" of disease application. Evaluation of this contextual technique suggests that it is very useful in improving undergraduate student focus and motivation, and offers many advantages to the instructor as well. [Abstract/Link to Full Text]

Eisen A, Batzli JM, Becker D, Fambrough DM, Pearlman R, Shingles R, Brosnan R, Ledbetter ML, Campbell AM
Points of view: a survey of survey courses: are they effective?
Cell Biol Educ. 2005;4(2):123-37. [Abstract/Link to Full Text]

Wieman C
From the National Academies: overview of the National Research Council's Board on Science Education and personal reflections as a science teacher.
Cell Biol Educ. 2005;4(2):118-20. [Abstract/Link to Full Text]

Tanner K, Allen D
Approaches to biology teaching and learning: understanding the wrong answers--teaching toward conceptual change.
Cell Biol Educ. 2005;4(2):112-7. [Abstract/Link to Full Text]

Chattopadhyay A
Understanding of genetic information in higher secondary students in northeast India and the implications for genetics education.
Cell Biol Educ. 2005;4(1):97-104.
Since the work of Watson and Crick in the mid-1950s, the science of genetics has become increasingly molecular. The development of recombinant DNA technologies by the agricultural and pharmaceutical industries led to the introduction of genetically modified organisms (GMOs). By the end of the twentieth century, reports of animal cloning and recent completion of the Human Genome Project (HGP), as well as techniques developed for DNA fingerprinting, gene therapy, and others, raised important ethical and social issues about the applications of such technologies. For citizens to understand these issues, appropriate genetics education is needed in schools. A good foundation in genetics also requires knowledge and understanding of topics such as structure and function of cells, cell division, and reproduction. Studies at the international level report poor understanding by students of genetics and genetic technologies, with widespread misconceptions at various levels. Similar studies were nearly absent in India. In this study, I examine Indian higher secondary students' understanding of genetic information related to cells and transmission of genetic information during reproduction. Although preliminary in nature, the results provide cause for concern over the status of genetics education in India. The nature of students' conceptual understandings and possible reasons for the observed lack of understanding are discussed. [Abstract/Link to Full Text]

Lindquester GJ, Burks RL, Jaslow CR
Developing information fluency in introductory biology students in the context of an investigative laboratory.
Cell Biol Educ. 2005;4(1):58-96.
Students of biology must learn the scientific method for generating information in the field. Concurrently, they should learn how information is reported and accessed. We developed a progressive set of exercises for the undergraduate introductory biology laboratory that combine these objectives. Pre- and postassessments of approximately 100 students suggest that increases occurred, some statistically significant, in the number of students using various library-related resources, in the numbers and confidence level of students using various technologies, and in the numbers and confidence levels of students involved in various activities related to the scientific method. Following this course, students should be better prepared for more advanced and independent study. [Abstract/Link to Full Text]

Stevens R, Johnson DF, Soller A
Probabilities and predictions: modeling the development of scientific problem-solving skills.
Cell Biol Educ. 2005;4(1):42-57.
The IMMEX (Interactive Multi-Media Exercises) Web-based problem set platform enables the online delivery of complex, multimedia simulations and the rapid collection of student performance data, and has already been used in several genetic simulations. The next step is the use of these data to understand and improve student learning in a formative manner. This article describes the development of probabilistic models of undergraduate student problem solving in molecular genetics that detailed the spectrum of strategies students used when problem solving, and how the strategic approaches evolved with experience. The actions of 776 university sophomore biology majors from three molecular biology lecture courses were recorded and analyzed. Performances on each of six simulations were first grouped by artificial neural network clustering to provide individual performance measures, and then sequences of these performances were probabilistically modeled by hidden Markov modeling to provide measures of progress. The models showed that students with different initial problem-solving abilities choose different strategies. Initial and final strategies varied across different sections of the same course and were not strongly correlated with other achievement measures. In contrast to previous studies, we observed no significant gender differences. We suggest that instructor interventions based on early student performances with these simulations may assist students to recognize effective and efficient problem-solving strategies and enhance learning. [Abstract/Link to Full Text]
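The progression-modeling step can be illustrated in miniature. The sketch below is a deliberate simplification of the study's method: instead of fitting a hidden Markov model over neural-network-derived performance states, it estimates plain (fully observable) Markov transition probabilities between strategy labels, and all sequences are invented.

```python
from collections import Counter, defaultdict

# Simplified sketch of the modeling step: the study fit hidden Markov models
# to sequences of neural-network-assigned strategy states; here we estimate
# plain Markov transition probabilities from invented strategy sequences,
# just to show how progression between strategies can be quantified.

def transition_probs(sequences):
    """Maximum-likelihood transition probabilities between strategy states."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

# Invented strategy labels per simulation attempt for three students.
sequences = [
    ["guess", "guess", "efficient"],
    ["guess", "efficient", "efficient"],
    ["efficient", "efficient", "efficient"],
]
probs = transition_probs(sequences)
print(round(probs["guess"]["efficient"], 2))  # probability of improving
```

In the actual study the states are hidden and inferred jointly with the transitions, but the interpretive payoff is the same: the transition structure shows whether students tend to move from guessing toward efficient strategies with experience.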

Tomanek D, Moreno N, Elgin SC, Flowers S, May V, Dolan E, Tanner K
Points of view: effective partnerships between K-12 and higher education.
Cell Biol Educ. 2005;4(1):28-37. [Abstract/Link to Full Text]

Dimaano C, Pepion K
Meeting report: building bridges for diverse professors of tomorrow.
Cell Biol Educ. 2005;4(1):24-7. [Abstract/Link to Full Text]

Recent Articles in BMC Medical Informatics and Decision Making

O'Cathain A, Munro J, Armstrong I, O'Donnell C, Heaney D
The effect of attitude to risk on decisions made by nurses using computerised decision support software in telephone clinical assessment: an observational study.
BMC Med Inform Decis Mak. 2007 Nov 29;7(1):39.
ABSTRACT: BACKGROUND: There is variation in the decisions made by telephone assessment nurses using computerised decision support software (CDSS). Variation in nurses' attitudes to risk has been identified as a possible explanatory factor. This study was undertaken to explore the effect of nurses' attitudes to risk on the decisions they make when using CDSS. The setting was NHS 24, a nationwide telephone assessment service in Scotland in which nurses assess health problems, mainly on behalf of out-of-hours general practice, and triage calls to self care, a service at a later date, or immediate contact with a service. METHODS: All NHS 24 nurses were asked to complete a questionnaire about their background and attitudes to risk. Routine data on the decisions made by these nurses were obtained for a six-month period in 2005. Multilevel modelling was used to measure the effect of nurses' risk attitudes on the proportion of calls they sent to self care rather than to services. RESULTS: The response rate to the questionnaire was 57% (265/464). 231,112 calls were matched to 211 of these nurses. 16% (36,342/231,112) of calls were sent to self care, varying threefold between the top and bottom deciles of nurses. Fifteen risk attitude variables were tested, including items on attitudes to risk in clinical decision-making. Attitudes to risk varied greatly between nurses; for example, 27% (71/262) of nurses strongly agreed that an NHS 24 nurse "must not take any risks with physical illness" while 17% (45/262) disagreed. After case-mix adjustment, there was some evidence that nurses' attitudes to risk affected decisions, but this was inconsistent and unconvincing. CONCLUSIONS: Much of the variation in decision-making by nurses using CDSS remained unexplained. There was no convincing evidence that nurses' attitudes to risk affected the decisions made. This may have been due to the limitations of the instrument used to measure risk attitude. [Abstract/Link to Full Text]

Pollak VE, Lorch JA
Effect of electronic patient record use on mortality in End Stage Renal Disease, a model chronic disease: retrospective analysis of 9 years of prospectively collected data.
BMC Med Inform Decis Mak. 2007 Nov 28;7(1):38.
ABSTRACT: BACKGROUND: In chronic disease, health information technology promises but has yet to demonstrate improved outcomes and decreased costs. The main aim of the study was to determine the effects on mortality and cost of an electronic patient record used in daily patient care in a model chronic disease, End Stage Renal Disease treated by chronic maintenance hemodialysis. Dialysis treatment is highly regulated, and near uniform in treatment modalities and drugs used. METHODS: The particular electronic patient record, patient-centered and extensively coded, was used first in patient care in 3 dialysis units in New York, NY in 1998, 1999, and 2000. All data were stored "live"; none were archived. By December 31, 2006, the patients had been treated by maintenance hemodialysis for a total of 3924 years. A retrospective analysis was made using query tools embedded in the software. The United States Renal Data System dialysis population served as controls. In all there were 1790 patients, with many underlying primary diseases and multiple comorbid conditions affecting many organ systems. Year by year mortality, hospital admissions, and staffing were analyzed, and the data were compared with national data compiled by the United States Renal Data System. RESULTS: Analyzed by calendar year after electronic patient record implementation, mortality decreased strikingly. In years 3-9 mortality was lower than in years 1-2 by 23%, 48%, and 34% in the 3 units, and was 37%, 37%, and 35% less than that reported by the United States Renal Data System. Clinical staffing was 25% fewer per 100 patients than the national average, thereby lowering costs. CONCLUSIONS: To our knowledge, this is the first demonstration that an electronic patient record, albeit of particular design, can have a favorable effect on outcomes and cost in chronic disease. 
That the population studied has many underlying diseases affecting all organ systems suggests that the electronic patient record design may enable application to many fields of medical practice. [Abstract/Link to Full Text]

Nystrom M, Merkel M, Petersson H, Ahlfeldt H
Creating a medical dictionary using word alignment: The influence of sources and resources.
BMC Med Inform Decis Mak. 2007 Nov 23;7(1):37.
ABSTRACT: BACKGROUND: Automatic word alignment of parallel texts with the same content in different languages is, among other things, used to generate dictionaries for new translations. The quality of the generated word alignment depends on the quality of the input resources. In this paper we report on automatic word alignment of the English and Swedish versions of the medical terminology systems ICD-10, ICF, NCSP, KSH97-P and parts of MeSH, and on how the terminology systems and types of resources influence the quality. METHODS: We automatically word aligned the terminology systems using static resources (e.g., dictionaries), statistical resources (e.g., statistically derived dictionaries), and training resources, which were generated from manual word alignment. We varied which parts of the terminology systems we used to generate the resources, which parts we word aligned, and which types of resources we used in the alignment process, in order to explore the influence the different terminology systems and resources have on recall and precision. After the analysis, we used the best configuration of the automatic word alignment to generate candidate term pairs. We then manually verified the candidate term pairs and included the correct pairs in an English-Swedish dictionary. RESULTS: The results indicate that more resources and resource types give better results, but the size of the parts used to generate the resources only partly affects the quality. The most generally useful resources were generated from ICD-10, while resources generated from MeSH were not as general as other resources. Systematic inter-language differences in the structure of the terminology system rubrics make the rubrics harder to align. Manually created training resources give nearly as good results as a union of static resources, statistical resources and training resources, and noticeably better results than a union of static resources and statistical resources.
The verified English-Swedish dictionary contains 24,000 term pairs in base forms. CONCLUSIONS: More resources give better results in the automatic word alignment, but some resources give only small improvements. The most important resource type is training resources, and the most generally useful resources were generated from ICD-10. [Abstract/Link to Full Text]
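The statistical resources mentioned above, dictionaries derived automatically from parallel rubrics, can be sketched with simple co-occurrence scoring. The Dice coefficient used here and the toy English-Swedish rubric pairs are illustrative assumptions, not the study's actual alignment system.

```python
from collections import Counter
from itertools import product

def dice_alignments(rubric_pairs, threshold=0.7):
    """Score candidate term pairs from aligned rubrics by the Dice
    coefficient 2*cooc/(count_src + count_tgt), keeping high scorers."""
    src_n, tgt_n, cooc = Counter(), Counter(), Counter()
    for src, tgt in rubric_pairs:
        s_words, t_words = set(src.split()), set(tgt.split())
        src_n.update(s_words)
        tgt_n.update(t_words)
        cooc.update(product(s_words, t_words))   # count co-occurring pairs
    score = lambda s, t: 2 * cooc[s, t] / (src_n[s] + tgt_n[t])
    return {(s, t): score(s, t) for s, t in cooc if score(s, t) >= threshold}

# Toy aligned rubrics (invented, loosely ICD-style).
pairs = [("acute bronchitis", "akut bronkit"),
         ("chronic bronchitis", "kronisk bronkit"),
         ("acute appendicitis", "akut appendicit")]
candidates = dice_alignments(pairs)
```

With these three rubrics the four correct pairs score 1.0 while coincidental pairings (e.g. "acute"/"bronkit") fall below the threshold, which is the intuition behind filtering candidate term pairs before manual verification.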

Cevenini G, Barbini E, Scolletta S, Biagioli B, Giomarelli P, Barbini P
A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery - Part II: an illustrative example.
BMC Med Inform Decis Mak. 2007 Nov 22;7(1):36.
ABSTRACT: BACKGROUND: Popular predictive models for estimating morbidity probability after heart surgery are compared critically in a unitary framework. The study is divided into two parts. In the first part, modelling techniques and the intrinsic strengths and weaknesses of different approaches were discussed from a theoretical point of view. In this second part, the performances of the same models are evaluated in an illustrative example. METHODS: Eight models were developed: Bayes linear and quadratic models, a k-nearest neighbour model, a logistic regression model, Higgins and direct scoring systems, and two feed-forward artificial neural networks with one and two layers. Cardiovascular, respiratory, neurological, renal, infectious and hemorrhagic complications were defined as morbidity. Training and testing sets, each of 545 cases, were used. The optimal set of predictors was chosen from a collection of 78 preoperative, intraoperative and postoperative variables by a stepwise procedure. Discrimination and calibration were evaluated by the area under the receiver operating characteristic curve and the Hosmer-Lemeshow goodness-of-fit test, respectively. RESULTS: Scoring systems and the logistic regression model required the largest set of predictors, while Bayesian and k-nearest neighbour models were much more parsimonious. In testing data, all models showed acceptable discrimination capacities; however, the Bayes quadratic model, using only three predictors, provided the best performance. All models showed satisfactory generalization ability: again the Bayes quadratic model exhibited the best generalization, while artificial neural networks and scoring systems gave the worst results. Finally, poor calibration was obtained when using scoring systems, the k-nearest neighbour model and artificial neural networks, while Bayes (after recalibration) and logistic regression models gave adequate results.
CONCLUSIONS: Although all the predictive models showed acceptable discrimination performance in the example considered, the Bayes and logistic regression models seemed better than the others because they also had good generalization and calibration. The Bayes quadratic model seemed to be a convincing alternative to the much more usual Bayes linear and logistic regression models. It showed its capacity to identify a minimum core of predictors generally recognized as essential for pragmatically evaluating the risk of developing morbidity after heart surgery. [Abstract/Link to Full Text]
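Discrimination, reported above as the area under the receiver operating characteristic curve, can be computed without constructing the curve at all, via the Mann-Whitney statistic. The scores below are invented for illustration.

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve as the probability that a randomly chosen
    diseased case scores higher than a randomly chosen non-diseased case
    (Mann-Whitney estimate; ties count one half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Three of the four case-control pairs are correctly ordered here.
example = auc([0.9, 0.4], [0.6, 0.2])
```

A model with perfect discrimination ranks every morbid case above every non-morbid one and scores 1.0; a coin flip scores 0.5. Calibration (the Hosmer-Lemeshow test) is a separate question and is not captured by this statistic.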

Barbini E, Cevenini G, Scolletta S, Biagioli B, Giomarelli P, Barbini P
A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery - Part I: model planning.
BMC Med Inform Decis Mak. 2007 Nov 22;7(1):35.
ABSTRACT: BACKGROUND: Different methods have recently been proposed for predicting morbidity in intensive care units (ICU). The aim of the present study was to critically review a number of approaches for developing models capable of estimating the probability of morbidity in ICU after heart surgery. The study is divided into two parts. In this first part, popular models used to estimate the probability of class membership are grouped into distinct categories according to their underlying mathematical principles. Modelling techniques and the intrinsic strengths and weaknesses of each model are analysed and discussed from a theoretical point of view, in consideration of clinical applications. METHODS: Models based on Bayes rule, the k-nearest neighbour algorithm, logistic regression, scoring systems and artificial neural networks are investigated. Key issues for model design are described. The mathematical treatment of some aspects of model structure is also included for readers interested in developing models, though a full understanding of the mathematical relationships is not necessary if the reader is only interested in the practical meaning of model assumptions, weaknesses and strengths from a user point of view. RESULTS: Scoring systems are very attractive due to their simplicity of use, although this may undermine their predictive capacity. Logistic regression models are trustworthy tools, although they suffer from the principal limitations of most regression procedures. Bayesian models seem to be a good compromise between complexity and predictive performance, but model recalibration is generally necessary. The k-nearest neighbour approach may be a valid nonparametric technique, though computational cost and the need for large data storage are major weaknesses. Artificial neural networks have intrinsic advantages with respect to common statistical models, though the training process may be problematic.
CONCLUSIONS: Knowledge of model assumptions and of the theoretical strengths and weaknesses of the different approaches is fundamental for designing models for estimating the probability of morbidity after heart surgery. However, a rational choice also requires evaluation and comparison of the actual performances of locally developed competitive models in the clinical scenario, to obtain satisfactory agreement between local needs and model response. In the second part of this study the above predictive models will therefore be tested on real data acquired in a specialized ICU. [Abstract/Link to Full Text]

Shih HC, Chou P, Liu CM, Tung TH
Estimation of progression of multi-state chronic disease using the Markov model and prevalence pool concept.
BMC Med Inform Decis Mak. 2007 Nov 9;7(1):34.
ABSTRACT: BACKGROUND: We propose a simple new method for estimating the progression of a chronic disease with multi-state properties by unifying the prevalence pool concept with the Markov process model. METHODS: Estimation of progression rates in the multi-state model is performed using the E-M algorithm. This approach is applied to data on Type 2 diabetes screening. RESULTS: Good convergence of the estimations is demonstrated. In contrast to previous Markov models, the major advantage of our proposed method is that integrating the prevalence pool equation (the number entering the prevalence pool is equal to the number leaving it) into the likelihood function not only simplifies the likelihood function but also makes parameter estimation stable. CONCLUSIONS: This approach may be useful in quantifying the progression of a variety of chronic diseases. [Abstract/Link to Full Text]
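A multi-state progression model of the kind estimated here can be sketched as a discrete-time Markov chain over normal, preclinical (the prevalence pool) and clinical states. The annual transition probabilities below are invented for illustration; the paper estimates such rates from screening data via the E-M algorithm rather than assuming them.

```python
import numpy as np

# Normal -> preclinical (prevalence pool) -> clinical, with illustrative
# annual transition probabilities (invented numbers).
P = np.array([[0.97, 0.03, 0.00],   # incidence into the prevalence pool
              [0.00, 0.90, 0.10],   # progression out of the pool
              [0.00, 0.00, 1.00]])  # clinical disease is absorbing here

dist = np.array([1.0, 0.0, 0.0])    # cohort starts disease-free
history = [dist.copy()]
for _ in range(10):                  # ten annual steps
    dist = dist @ P
    history.append(dist.copy())
```

The sketch just propagates the cohort distribution; in a stationary screened population the prevalence pool balance quoted in the abstract (entries into the pool equal exits from it) is the extra constraint that the authors fold into the likelihood.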

Blaya JA, Shin SS, Yagui MJ, Yale G, Suarez CZ, Asencios LL, Cegielski JP, Fraser HS
A web-based laboratory information system to improve quality of care of tuberculosis patients in Peru: functional requirements, implementation and usage statistics.
BMC Med Inform Decis Mak. 2007 Oct 28;7(1):33.
ABSTRACT: BACKGROUND: Multi-drug resistant tuberculosis patients in resource-poor settings experience large delays in starting appropriate treatment and may not be monitored appropriately due to an overburdened laboratory system, delays in communication of results, and missing or error-prone laboratory data. The objective of this paper is to describe an electronic laboratory information system implemented to alleviate these problems and its expanding use by the Peruvian public sector, as well as examine the broader issues of implementing such systems in resource-poor settings. METHODS: A web-based laboratory information system "e-Chasqui" has been designed and implemented in Peru to improve the timeliness and quality of laboratory data. It was deployed in the national TB laboratory, two regional laboratories and twelve pilot health centers. Using needs assessment and workflow analysis tools, e-Chasqui was designed to provide for improved patient care, increased quality control, and more efficient laboratory monitoring and reporting. RESULTS: Since its full implementation in March 2006, 29,944 smear microscopy, 31,797 culture and 7,675 drug susceptibility test results have been entered. Over 99% of these results have been viewed online by the health centers. High user satisfaction and heavy use have led to the expansion of e-Chasqui to additional institutions. In total, e-Chasqui will serve a network of institutions providing medical care for over 3.1 million people. The cost to maintain this system is approximately US$0.53 per sample or 1% of the Peruvian 2006 TB program's budget. DISCUSSION: Electronic laboratory information systems have a large potential to improve patient care and public health monitoring in resource-poor settings. Some of the challenges faced in these settings, such as lack of trained personnel, limited transportation, and large coverage areas, are obstacles that a well-designed system can overcome. 
e-Chasqui has the potential to provide a national TB laboratory network in Peru. Furthermore, the core functionality of e-Chasqui has been implemented in the open source medical record system OpenMRS for other countries to use. [Abstract/Link to Full Text]

Frenz CM
Deafness mutation mining using regular expression based pattern matching.
BMC Med Inform Decis Mak. 2007 Oct 25;7(1):32.
ABSTRACT: BACKGROUND: While keyword based queries of databases such as Pubmed are frequently of great utility, the ability to use regular expressions in place of a keyword can often improve the results output by such databases. Regular expressions can allow for the identification of element types that cannot be readily specified by a single keyword and can allow for different words with similar character sequences to be distinguished. RESULTS: A Perl based utility was developed to allow the use of regular expressions in Pubmed searches, thereby improving the accuracy of the searches. CONCLUSIONS: This utility was then utilized to create a comprehensive listing of all DFN deafness mutations discussed in Pubmed records containing the keywords "human ear". [Abstract/Link to Full Text]
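The utility itself was written in Perl; the underlying idea, regular-expression filtering where a plain keyword is too coarse, can be sketched in Python. The pattern below for DFN locus symbols (DFNA/DFNB/DFNX followed by a number) is an illustrative assumption, not the tool's actual expression.

```python
import re

# A keyword search for "DFN" cannot distinguish locus symbols such as
# DFNA5 or DFNB1 from other strings; a regular expression can.
dfn_pattern = re.compile(r'\bDFN[ABX]?\d+\b')

def dfn_mentions(records):
    """Map each record id to the sorted set of DFN locus symbols it mentions."""
    return {pmid: sorted(set(dfn_pattern.findall(text)))
            for pmid, text in records.items()}

records = {
    "rec1": "Mutations at DFNB1 and DFNA5 cause nonsyndromic deafness.",
    "rec2": "The keyword deafness appears without any locus symbol.",
}
hits = dfn_mentions(records)
```

Applied to fetched PubMed records, such a pattern both finds element types no single keyword can express and separates similar character sequences, which is the accuracy gain the abstract describes.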

Venema AC, van Ginneken AM, de Wilde M, Bogers AJ
Is OpenSDE an alternative for dedicated medical research databases? An example in coronary surgery.
BMC Med Inform Decis Mak. 2007 Oct 22;7(1):31.
ABSTRACT: BACKGROUND: When using a conventional relational database approach to collect and query data in the context of specific clinical studies, a study with a new data set usually requires the design of a new database and entry forms. OpenSDE (SDE = Structured Data Entry) is intended to provide a flexible and intuitive way to create databases and entry forms for the collection of data in a structured format. This study illustrates the use of OpenSDE as a potential alternative to a conventional approach with respect to data modelling, database creation, data entry, and data extraction. METHODS: A database and entry forms are created using OpenSDE and MS Access to support collection of coronary surgery data, based on the Adult Cardiac Surgery Data Set of the Society of Thoracic Surgeons. Data of 52 cases are entered, and nine different queries are designed and executed on both databases. RESULTS: Design of the data model and the creation of entry forms were experienced as more intuitive and less labor intensive with OpenSDE. Both resulting databases provided sufficient expressiveness to accommodate the data set. Data entry was more flexible with OpenSDE. Queries produced equal and correct results with comparable effort. CONCLUSIONS: For prospective studies involving well-defined and straightforward data sets, OpenSDE deserves to be considered as an alternative to the conventional approach. [Abstract/Link to Full Text]

Legare F, Moher D, Elwyn G, Leblanc A, Gravel K
Instruments to assess the perception of physicians in the decision-making process of specific clinical encounters: a systematic review.
BMC Med Inform Decis Mak. 2007 Oct 15;7(1):30.
ABSTRACT: BACKGROUND: The measurement of processes and outcomes that reflect the complexity of the decision-making process within specific clinical encounters is an important area of research to pursue. A systematic review was conducted to identify instruments that assess the perception physicians have of the decision-making process within specific clinical encounters. METHODS: For every year available up until April 2007, PubMed, PsycINFO, Current Contents, Dissertation Abstracts and Sociological Abstracts were searched for original studies in English or French. Reference lists from retrieved studies were also consulted. Studies were included if they reported a self-administered instrument evaluating physicians' perceptions of the decision-making process within specific clinical encounters, contained sufficient description to permit critical appraisal and presented quantitative results based on administering the instrument. Two individuals independently assessed the eligibility of the instruments and abstracted information on their conceptual underpinnings, main evaluation domain, development, format, reliability, validity and responsiveness. They also assessed the quality of the studies that reported on the development of the instruments with a modified version of STARD. RESULTS: Out of 3431 records identified and screened for evaluation, 26 potentially relevant instruments were assessed; 11 met the inclusion criteria. Five instruments were published before 1995. Among those published after 1995, five offered a corresponding patient version. Overall, the main evaluation domains were: satisfaction with the clinical encounter (n=2), mutual understanding between health professional and patient (n=2), mental workload (n=1), frustration with the clinical encounter (n=1), nurse-physician collaboration (n=1), perceptions of communication competence (n=2), degree of comfort with a decision (n=1) and information on medication (n=1). 
For most instruments (n=10), some reliability and validity criteria were reported in French or English. Overall, the mean number of items on the modified version of STARD was 12.4 (range: 2 to 18). CONCLUSIONS: This systematic review provides a critical appraisal and repository of instruments that assess the perception physicians have of the decision-making process within specific clinical encounters. More research is needed to pursue the validation of the existing instruments and the development of patient versions. This will help researchers capture the complexity of the decision-making process within specific clinical encounters. [Abstract/Link to Full Text]

Pelat C, Boelle PY, Cowling BJ, Carrat F, Flahault A, Ansart S, Valleron AJ
Online detection and quantification of epidemics.
BMC Med Inform Decis Mak. 2007 Oct 15;7(1):29.
ABSTRACT: BACKGROUND: Time series data are increasingly available in health care, especially for the purpose of disease surveillance. The analysis of such data has long used periodic regression models to detect outbreaks and estimate epidemic burdens. However, implementation of the method may be difficult due to lack of statistical expertise. No dedicated tool is available to perform and guide analyses. RESULTS: We developed an online computer application allowing analysis of epidemiologic time series; the system is available online. The data are assumed to consist of a periodic baseline level and irregularly occurring epidemics. The program allows estimation of the periodic baseline level and an associated upper forecast limit. The latter defines a threshold for epidemic detection. The burden of an epidemic is defined as the cumulative signal in excess of the baseline estimate. The user is guided through the necessary choices for the analysis. We illustrate the usage of the online epidemic analysis tool with two examples: the retrospective detection and quantification of excess pneumonia and influenza (P&I) mortality, and the prospective surveillance of gastrointestinal disease (diarrhoea). CONCLUSIONS: The online application allows easy detection of special events in an epidemiologic time series and quantification of excess mortality/morbidity as a change from baseline. It should be a valuable tool for field and public health practitioners. [Abstract/Link to Full Text]
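The periodic-baseline-plus-threshold scheme described above can be sketched with a Serfling-type harmonic regression: fit a sinusoidal baseline by least squares, take an upper forecast limit, flag points above it, and sum the excess over flagged points as the epidemic burden. The weekly counts and the injected epidemic below are simulated; a real analysis would also refit after excluding epidemic periods.

```python
import numpy as np

def periodic_baseline(y, period=52.0, z=1.96):
    """Fit a periodic-regression (Serfling-type) baseline by least squares.

    Returns (baseline, threshold, epidemic_flags); the upper forecast limit
    is approximated as baseline + z * residual standard deviation."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    baseline = X @ beta
    threshold = baseline + z * np.std(y - baseline, ddof=3)
    return baseline, threshold, y > threshold

# Two simulated seasons of weekly counts, with a 3-week epidemic at week 30.
t = np.arange(104, dtype=float)
y = 100 + 20 * np.sin(2 * np.pi * t / 52)
y[30:33] += 80
baseline, threshold, flags = periodic_baseline(y)
excess = (y - baseline)[flags].sum()   # epidemic burden above baseline
```

Only the three epidemic weeks cross the forecast limit in this toy series, and `excess` estimates their cumulative burden above the fitted baseline.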

Craigmile PF, Kim N, Fernandez S, Bonsu B
Modeling and detection of respiratory-related outbreak signatures.
BMC Med Inform Decis Mak. 2007 Oct 5;7(1):28.
ABSTRACT: BACKGROUND: Time series methods are commonly used to detect disease outbreak signatures (e.g., signals due to influenza outbreaks and anthrax attacks) from varying respiratory-related diagnostic or syndromic data sources. Typically this involves two components: (i) using time series methods to model the baseline background distribution (the time series process that is assumed to contain no outbreak signatures), and (ii) detecting outbreak signatures using filter-based time series methods. METHODS: We consider time series models for chest radiograph data obtained from Midwest children's emergency departments. These models incorporate available covariate information, such as patient visit counts and smoothed ambient temperature series, as well as time series dependencies on daily and weekly seasonal scales. Respiratory-related outbreak signature detection is based on filtering the one-step-ahead prediction errors obtained from the time series models for the respiratory-complaint background. RESULTS: Using simulation experiments based on a stochastic model for an anthrax attack, we illustrate the effect of the choice of filter and of the statistical models upon radiograph-attributed outbreak signature detection. CONCLUSIONS: We demonstrate the importance of using seasonal autoregressive integrated moving average (SARIMA) time series models with covariates in the modeling of respiratory-related time series data. We find some homogeneity in the time series models for the respiratory-complaint backgrounds across the Midwest emergency departments studied. Our simulations show that the balance between specificity, sensitivity, and timeliness to detect an outbreak signature differs by emergency department and by the choice of filter. The linear and exponential filters provide a good balance. [Abstract/Link to Full Text]
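The filtering step, monitoring one-step-ahead prediction errors for a sustained shift, can be sketched with an exponential (EWMA) filter. The control-limit form below assumes roughly independent Gaussian errors under the background model, and all numbers are illustrative rather than from the study.

```python
import numpy as np

def ewma_alarms(errors, train=30, lam=0.3, L=3.0):
    """Exponential (EWMA) filter over one-step-ahead prediction errors.

    Flags times where the smoothed error exceeds L times its asymptotic
    standard deviation, sigma * sqrt(lam / (2 - lam)), with sigma taken
    from an outbreak-free training period."""
    sigma = np.std(errors[:train])
    limit = L * sigma * np.sqrt(lam / (2.0 - lam))
    z, alarms = 0.0, []
    for t, e in enumerate(errors):
        z = lam * e + (1.0 - lam) * z   # exponentially weighted smoothing
        if z > limit:
            alarms.append(t)
    return alarms

# 30 outbreak-free errors alternating +/-1, then a sustained shift of +6
# standing in for an outbreak signature entering the prediction errors.
errors = np.array([1.0, -1.0] * 15 + [6.0] * 5)
alarms = ewma_alarms(errors)
```

The filter stays quiet through the noisy background and alarms from the first shifted point onward; the smoothing constant `lam` trades timeliness against sensitivity to small sustained shifts, which is the balance the abstract discusses.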

Reynolds GM, Peet AC, Arvanitis TN
Generating prior probabilities for classifiers of brain tumours using belief networks.
BMC Med Inform Decis Mak. 2007;7:27.
BACKGROUND: Numerous methods for classifying brain tumours based on magnetic resonance spectra and imaging have been presented in the last 15 years. Generally, these methods use supervised machine learning to develop a classifier from a database of cases for which the diagnosis is already known. However, little has been published on developing classifiers based on mixed modalities, e.g. combining imaging information with spectroscopy. In this work a method of generating probabilities of tumour class from anatomical location is presented. METHODS: The method of "belief networks" is introduced as a means of generating probabilities that a tumour is any given type. The belief networks are constructed using a database of paediatric tumour cases consisting of data collected over five decades; the problems associated with using this data are discussed. To verify the usefulness of the networks, an application of the method is presented in which prior probabilities were generated and combined with a classification of tumours based solely on MRS data. RESULTS: Belief networks were constructed from a database of over 1300 cases. These can be used to generate a probability that a tumour is any given type. Networks are presented for astrocytoma grades I and II, astrocytoma grades III and IV, ependymoma, pineoblastoma, primitive neuroectodermal tumour (PNET), germinoma, medulloblastoma, craniopharyngioma and a group representing rare tumours, "other". Using the network to generate prior probabilities for classification improves the accuracy when compared with generating prior probabilities based on class prevalence. CONCLUSION: Bayesian belief networks are a simple way of using discrete clinical information to generate probabilities usable in classification. The belief network method can be robust to incomplete datasets. Inclusion of a priori knowledge is an effective way of improving classification of brain tumours by non-invasive methods. [Abstract/Link to Full Text]
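The core combination step, a prior over tumour types derived from anatomical location multiplied by class probabilities from an MRS-only classifier and renormalised, is plain Bayes rule and can be sketched directly. The location counts, class names and smoothing below are invented for illustration, not the study's database or network structure.

```python
import numpy as np

# Illustrative counts of tumour type by anatomical location (invented).
counts = {
    "posterior_fossa": {"astro_low": 30, "ependymoma": 10, "medulloblastoma": 60},
    "supratentorial":  {"astro_low": 70, "ependymoma": 25, "medulloblastoma": 5},
}

def location_posterior(location, mrs_probs, alpha=1.0):
    """Combine a location-based prior (counts with Laplace smoothing alpha)
    with class probabilities from an MRS-only classifier via Bayes rule."""
    classes = sorted(counts[location])
    prior = np.array([counts[location][k] + alpha for k in classes])
    prior = prior / prior.sum()
    like = np.array([mrs_probs[k] for k in classes])
    post = prior * like
    return dict(zip(classes, post / post.sum()))

# A flat MRS classifier leaves the location prior unchanged.
flat = {"astro_low": 1 / 3, "ependymoma": 1 / 3, "medulloblastoma": 1 / 3}
post = location_posterior("posterior_fossa", flat)
```

This also shows why such priors beat class-prevalence priors: the same MRS evidence yields different posteriors in the posterior fossa and supratentorially, because the location shifts the prior. The smoothing term keeps the scheme usable when a tumour type is unrecorded at a location, a mild nod to the incomplete historical data the abstract mentions.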

Street AF, Swift K, Annells M, Woodruff R, Gliddon T, Oakley A, Ottmann G
Developing a web-based information resource for palliative care: an action-research inspired approach.
BMC Med Inform Decis Mak. 2007 Sep 14;7(1):26.
ABSTRACT: BACKGROUND: General practitioners and community nurses rely on easily accessible, evidence-based online information to guide practice. To date, the methods that underpin the scoping of user-identified online information needs in palliative care have remained under-explored. This paper describes the benefits and challenges of a collaborative approach involving users and experts that informed the first stage of the development of a palliative care website [1]. METHODS: The action research-inspired methodology included a panel assessment of an existing palliative care website based in Victoria, Australia; a pre-development survey (n=197) scoping potential audiences and palliative care information needs; working parties conducting a needs analysis about necessary information content for a redeveloped website targeting health professionals and caregivers/patients; an iterative evaluation process involving users and experts; as well as a final evaluation survey (n=166). RESULTS: Involving users in the identification of content and links for a palliative care website is time-consuming and requires initial resources, strong networking skills and commitment. However, user participation provided crucial information that widened the scope of the website audience and guided the development and testing of the website. The needs analysis underpinning the project suggests that palliative care peak bodies need to address three distinct audiences (clinicians, allied health professionals, and patients and their caregivers). CONCLUSIONS: Web developers should pay close attention to the content, language, and accessibility needs of these groups. Given the substantial cost associated with the maintenance of authoritative health information sites, the paper proposes a more collaborative development in which users can be engaged in the definition of content to ensure relevance and responsiveness, and to eliminate unnecessary detail.
Access to volunteer networks forms an integral part of such an approach. [Abstract/Link to Full Text]

Mandl KD, Simons WW, Crawford WC, Abbett JM
Indivo: a personally controlled health record for health information exchange and communication.
BMC Med Inform Decis Mak. 2007;7:25.
BACKGROUND: Personally controlled health records (PCHRs), a subset of personal health records (PHRs), enable a patient to assemble, maintain and manage a secure copy of his or her medical data. Indivo (formerly PING) is an open source, open standards PCHR with an open application programming interface (API). RESULTS: We describe how the PCHR platform can provide standard building blocks for networked PHR applications. Indivo allows the ready integration of diverse sources of medical data under a patient's control through the use of standards-based communication protocols and APIs for connecting PCHRs to existing and future health information systems. CONCLUSION: The strict and transparent personal control model is designed to encourage widespread participation by patients, healthcare providers and institutions, thus creating the ecosystem for development of innovative, consumer-focused healthcare applications. [Abstract/Link to Full Text]

Curioso WH, Kurth AE
Access, use and perceptions regarding Internet, cell phones and PDAs as a means for health promotion for people living with HIV in Peru.
BMC Med Inform Decis Mak. 2007;7:24.
BACKGROUND: Internet tools, cell phones, and other information and communication technologies are being used by HIV-positive people on their own initiative. Little is known about the perceptions of HIV-positive people towards these technologies in Peru. The purpose of this paper is to report on perceptions towards use of information and communication technologies as a means to support antiretroviral medication adherence and HIV transmission risk reduction. METHODS: We conducted a qualitative study (in-depth interviews) among adult people living with HIV in two community-based clinics in Peru. RESULTS: 31 HIV-positive individuals in Lima were interviewed (n = 28 men, 3 women). People living with HIV in Peru are using tools such as cell phones, and the Internet (via E-mail, chat, list-serves) to support their HIV care and to make social and sexual connections. In general, they have positive perceptions about using the Internet, cell phones and PDAs for HIV health promotion interventions. CONCLUSION: Health promotion interventions using information and communication technology tools among people living with HIV in resource-constrained settings may be acceptable and feasible, and can build on existing patterns of use. [Abstract/Link to Full Text]

Shi H, Lyons-Weiler J
Clinical decision modeling system.
BMC Med Inform Decis Mak. 2007;7:23.
ABSTRACT: BACKGROUND: Decision analysis techniques can be applied in complex situations involving uncertainty and the consideration of multiple objectives. Classical decision modeling techniques require elicitation of too many parameter estimates and their conditional (joint) probabilities, and have not therefore been applied to the problem of identifying high-performance, cost-effective combinations of clinical options for diagnosis or treatments where many of the objectives are unknown or even unspecified. METHODS: We designed a Java-based software resource, the Clinical Decision Modeling System (CDMS), to implement Naïve Decision Modeling, and provide a use case based on published performance evaluation measures of various strategies for breast and lung cancer detection. Because cost estimates for many of the newer methods are not yet available, we assume equal cost. Our use case reveals numerous potentially high-performance combinations of clinical options for the detection of breast and lung cancer. RESULTS: Naïve Decision Modeling is a highly practical applied strategy which guides investigators through the process of establishing evidence-based integrative translational clinical research priorities. CDMS is not designed for clinical decision support. Inputs include performance evaluation measures and costs of various clinical options. The software finds trees with expected emergent performance characteristics and average cost per patient that meet stated filtering criteria. Key to the utility of the software are sophisticated graphical elements, including a tree browser, a receiver-operator characteristic surface plot, and a histogram of expected average cost per patient. The analysis pinpoints the potentially most relevant pairs of clinical options ('critical pairs') for which empirical estimates of conditional dependence may be critical. 
The assumption of independence can be tested with retrospective studies prior to the initiation of clinical trials designed to estimate clinical impact. High-performance combinations of clinical options may exist for breast and lung cancer detection. CONCLUSION: The software could prove useful in simplifying the objective-driven planning of complex integrative clinical studies without requiring a multi-attribute utility function, and it could lead to efficient integrative translational clinical study designs that move beyond simple pairwise competitive studies. Collaborators, who traditionally might compete to prioritize their own individual clinical options, can use the software as a common framework and guide to work together to produce increased understanding of the benefits of using alternative clinical combinations to shape strategic and cost-effective clinical workflows. [Abstract/Link to Full Text]

Lottridge DM, Chignell M, Danicic-Mizdrak R, Pavlovic NJ, Kushniruk A, Straus SE
Group differences in physician responses to handheld presentation of clinical evidence: a verbal protocol analysis.
BMC Med Inform Decis Mak. 2007;7:22.
BACKGROUND: To identify individual differences in physicians' needs for the presentation of evidence resources and preferences for mobile devices. METHODS: Within-groups analysis of responses to semi-structured interviews. Interviews consisted of using prototypes in response to task-based scenarios. The prototypes were implemented on two different form factors: a tablet style PC and a pocketPC. Participants were from three user groups: general internists, family physicians and medicine residents, and from two different settings: urban and semi-urban. Verbal protocol analysis, which consists of coding utterances, was conducted on the transcripts of the testing sessions. Statistical relationships were investigated between staff physicians' and residents' background variables, self-reported experiences with the interfaces, and verbal code frequencies. RESULTS: 47 physicians were recruited from general internal medicine, family practice clinics and a residency training program. The mean age of participants was 42.6 years. Physician specialty had a greater effect on device and information-presentation preferences than gender, age, setting or previous technical experience. Family physicians preferred the screen size of the tablet computer and were less concerned about its portability. Residents liked the screen size of the tablet, but preferred the portability of the pocketPC. Internists liked the portability of the pocketPC, but saw less advantage to the large screen of the tablet computer (F[2,44] = 4.94, p = .012). CONCLUSION: Different types of physicians have different needs and preferences for evidence-based resources and handheld devices. This study shows how user testing can be incorporated into the process of design to inform group-based customization. [Abstract/Link to Full Text]

Graham ID, Logan J, Bennett CL, Presseau J, O'Connor AM, Mitchell SL, Tetroe JM, Cranney A, Hebert P, Aaron SD
Physicians' intentions and use of three patient decision aids.
BMC Med Inform Decis Mak. 2007;7:20.
BACKGROUND: Decision aids are evidence based tools that assist patients in making informed values-based choices and supplement the patient-clinician interaction. While there is evidence to show that decision aids improve key indicators of patients' decision quality, relatively little is known about physicians' acceptance of decision aids or factors that influence their decision to use them. The purpose of this study was to describe physicians' perceptions of three decision aids, their expressed intent to use them, and their subsequent use of them. METHODS: We conducted a cross-sectional survey of random samples of Canadian respirologists, family physicians, and geriatricians. Three decision aids representing a range of health decisions were evaluated. The survey elicited physicians' opinions on the characteristics of the decision aid and their willingness to use it. Physicians who indicated a strong likelihood of using the decision aid were contacted three months later regarding their actual use of the decision aid. RESULTS: Of the 580 eligible physicians, 47% (n = 270) returned completed questionnaires. More than 85% of the respondents felt the decision aid was well developed and that it presented the essential information for decision making in an understandable, balanced, and unbiased manner. A majority of respondents (>80%) also felt that the decision aid would guide patients in a logical way, preparing them to participate in decision making and to reach a decision. Fewer physicians (<60%) felt the decision aid would improve the quality of patient visits or be easily implemented into practice and very few (27%) felt that the decision aid would save time. Physicians' intentions to use the decision aid were related to their comfort with offering it to patients, the decision aid topic, and the perceived ease of implementing it into practice. 
While 54% of the surveyed physicians indicated they would use the decision aid, less than a third followed through with this intention. CONCLUSION: Despite strong support for the format, content, and quality of patient decision aids, and physicians' stated intentions to adopt them into clinical practice, most did not use them within three months of completing the survey. There is a wide gap between intention and behaviour. Further research is required to study the determinants of this intention-behaviour gap and to develop interventions aimed at barriers to physicians' use of decision aids. [Abstract/Link to Full Text]

Koum G, Yekel A, Ndifon B, Etang J, Simard F
Design of a two-level Adaptive Multi-Agent System for malaria vectors driven by an ontology.
BMC Med Inform Decis Mak. 2007;7:19.
BACKGROUND: Understanding the heterogeneities in disease transmission dynamics as far as malaria vectors are concerned is a big challenge. Many studies tackling this problem have not found exact models to explain malaria vector propagation. METHODS: To address the problem we define an Adaptive Multi-Agent System (AMAS) which has the property of being elastic and is a two-level system as well. This AMAS is a dynamic system in which the two levels are linked by an ontology, which allows it to function both as a reduced system and as an extended system. At the primary level, the AMAS comprises organization agents, and at the secondary level it is constituted of analysis agents. Its entry point, a User Interface Agent, can reproduce itself because it is given a minimum of background knowledge; it learns appropriate "behavior" from the user in the presence of ambiguous queries, and from other agents of the AMAS in other situations. RESULTS: The outputs of our system include a series of tables and diagrams showing factors such as entomological parameters of malaria transmission, percentages of malaria transmission per malaria vector, and the entomological inoculation rate. Many other parameters can be produced by the system depending on the input data. CONCLUSION: Ours is an intelligent approach which differs from the statistical approaches that are sometimes used in the field, and it aligns with distributed artificial intelligence. In terms of the fight against malaria, our system offers opportunities to reduce the efforts of human resources, who are not obliged to cover the entire territory while conducting surveys. Secondly, the AMAS can determine the presence or absence of malaria vectors even when specific data have not been collected in the geographical area. Unlike with a statistical technique, the projection of our results in the field can sometimes prove more general. [Abstract/Link to Full Text]

Sladek RM, Tieman J, Currow DC
Improving search filter development: a study of palliative care literature.
BMC Med Inform Decis Mak. 2007;7:18.
BACKGROUND: It is difficult to systematically search for literature relevant to palliative care in general medical journals. A previously developed search filter for use on OVID Medline validated using a gold standard set of references identified through hand searching, achieved an unacceptably low sensitivity (45.4%). Retrieving relevant literature is integral to support evidence based practice, and understanding the nature of the incorrectly excluded citations (false negatives) using the filter may lead to improvement in the filter's performance. METHODS: The objectives were to describe the nature of subjects reflected in the false negative citations and to empirically improve the sensitivity of the search filter. A thematic analysis of MeSH terms by three independent reviewers was used to describe the subject coverage of the missed records. Using a frequency analysis of MeSH terms, those headings which could individually contribute at least 2.5% to sensitivity (occurring 19 or more times) were added to the search filter. All previously run searches were rerun at the same time as the revised filter, and results compared. RESULTS: Thematic analysis of MeSH terms identified thirteen themes reflected in the missing records, none of them intrinsically palliative. The addition of six MeSH terms to the existing search filter (physician-patient relations, prognosis, quality of life, survival rate, treatment outcome and attitude to health) led to an increase in sensitivity from 46.3% to 64.7%, offset by a decrease in precision from 72.6% to 21.9%. CONCLUSION: The filter's sensitivity was successfully increased using frequency analysis of MeSH terms, offset by a decrease in precision. 
A thematic analysis of MeSH terms for the false negative citations confirmed the absence of any intrinsically palliative theme or term, suggesting that future improvements to search filters for palliative care literature will first depend on better identifying how clinicians and researchers conceptualise palliative care. It is suggested that a constellation of parameters: stage of disease (advanced or active), prospect of cure (little or none), and treatment goals (primarily quality of life) may ultimately inform search strategies. This may be similarly true for chronic diseases, which share the inherent passage of time which marks them apart from acute, and therefore more readily identifiable, episodes of care. [Abstract/Link to Full Text]
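The sensitivity/precision trade-off reported above follows directly from the standard retrieval definitions. As an illustration only (the study does not publish its raw confusion counts; the counts below are hypothetical values chosen to roughly reproduce the reported percentages), a minimal sketch:

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of gold-standard (relevant) records the filter retrieves."""
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    """Fraction of retrieved records that are actually relevant."""
    return tp / (tp + fp)

# Hypothetical counts mirroring the reported trade-off: broadening the
# filter with frequent MeSH terms recovers more relevant records
# (sensitivity up) while admitting many irrelevant ones (precision down).
narrow = {"tp": 63, "fn": 73, "fp": 24}
broad = {"tp": 88, "fn": 48, "fp": 314}

for name, c in (("narrow", narrow), ("broad", broad)):
    print(name,
          f"sensitivity={sensitivity(c['tp'], c['fn']):.1%}",
          f"precision={precision(c['tp'], c['fp']):.1%}")
```

With these counts the narrow filter scores about 46.3% sensitivity / 72.4% precision and the broad one about 64.7% / 21.9%, matching the direction of change the authors describe.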

Yu W, Yesupriya A, Wulf A, Qu J, Gwinn M, Khoury MJ
An automatic method to generate domain-specific investigator networks using PubMed abstracts.
BMC Med Inform Decis Mak. 2007;7:17.
BACKGROUND: Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. RESULTS: We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and in 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0%). We demonstrated the application of our approach to dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70-90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. 
CONCLUSION: We successfully created a web-based prototype capable of creating domain-specific investigator networks based on an application that accurately generates detailed investigator profiles from PubMed abstracts combined with robust standard vocabularies. This approach could be used for other biomedical fields to efficiently establish domain-specific investigator networks. [Abstract/Link to Full Text]
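The authors' parsing strategy is not published as code; the sketch below is only a hedged illustration of the general idea of extracting institution and country from a PubMed-style affiliation string (the example string, e-mail address, and field-position heuristics are all assumptions, not the paper's actual rules):

```python
import re

# Hypothetical affiliation string in the typical PubMed layout:
# comma-separated institution parts, country near the end, often
# followed by a contact e-mail address.
affiliation = ("National Office of Public Health Genomics, "
               "Centers for Disease Control and Prevention, "
               "Atlanta, GA 30341, USA. author@example.org")

def parse_affiliation(text: str) -> dict:
    """Heuristic parse: institution = first comma-separated field,
    country = last field after stripping any e-mail address."""
    text = re.sub(r"\S+@\S+", "", text).rstrip(" .")
    fields = [f.strip() for f in text.split(",")]
    return {"institution": fields[0], "country": fields[-1]}

print(parse_affiliation(affiliation))
```

A production parser would need country-name normalization and many special cases, which is why the paper reports extraction rates and accuracies below 100%.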

Schardt C, Adams MB, Owens T, Keitz S, Fontelo P
Utilization of the PICO framework to improve searching PubMed for clinical questions.
BMC Med Inform Decis Mak. 2007;7:16.
BACKGROUND: Supporting 21st century health care and the practice of evidence-based medicine (EBM) requires ubiquitous access to clinical information and to knowledge-based resources to answer clinical questions. Many questions go unanswered, however, due to lack of skills in formulating questions, crafting effective search strategies, and accessing databases to identify best levels of evidence. METHODS: This randomized trial was designed as a pilot study to measure the relevancy of search results using three different interfaces for the PubMed search system. Two of the search interfaces utilized a specific framework called PICO, which was designed to focus clinical questions and to prompt for publication type or type of question asked. The third interface was the standard PubMed interface readily available on the Web. Study subjects were recruited from interns and residents on an inpatient general medicine rotation at an academic medical center in the US. Thirty-one subjects were randomized to one of the three interfaces, given 3 clinical questions, and asked to search PubMed for a set of relevant articles that would provide an answer for each question. The success of the search results was determined by a precision score, which compared the number of relevant or gold standard articles retrieved in a result set to the total number of articles retrieved in that set. RESULTS: Participants using the PICO templates (Protocol A or Protocol B) had higher precision scores for each question than the participants who used Protocol C, the standard PubMed Web interface. (Question 1: A = 35%, B = 28%, C = 20%; Question 2: A = 5%, B = 6%, C = 4%; Question 3: A = 1%, B = 0%, C = 0%) 95% confidence intervals were calculated for the precision for each question using a lower boundary of zero. However, the 95% confidence limits were overlapping, suggesting no statistical difference between the groups. 
CONCLUSION: Due to the small number of searches for each arm, this pilot study could not demonstrate a statistically significant difference between the search protocols. However there was a trend towards higher precision that needs to be investigated in a larger study to determine if PICO can improve the relevancy of search results. [Abstract/Link to Full Text]
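The precision score defined in the methods (gold-standard articles retrieved divided by total articles retrieved) can be stated compactly. A minimal sketch, with hypothetical PMIDs for illustration only:

```python
def precision_score(retrieved: set, gold: set) -> float:
    """Precision as defined in the study: number of gold-standard
    articles in the result set divided by the size of the result set."""
    if not retrieved:
        return 0.0
    return len(retrieved & gold) / len(retrieved)

# Hypothetical PMIDs: two of the five retrieved articles are relevant.
gold = {"111", "222", "333"}
result_set = {"111", "222", "444", "555", "666"}
print(f"precision = {precision_score(result_set, gold):.0%}")
```

Here the score is 40%; a more focused query (e.g., one built from a PICO template) would aim to raise this ratio by shrinking the denominator faster than the numerator.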

Wieland SC, Brownstein JS, Berger B, Mandl KD
Automated real time constant-specificity surveillance for disease outbreaks.
BMC Med Inform Decis Mak. 2007;7:15.
BACKGROUND: For real time surveillance, detection of abnormal disease patterns is based on a difference between patterns observed, and those predicted by models of historical data. The usefulness of outbreak detection strategies depends on their specificity; the false alarm rate affects the interpretation of alarms. RESULTS: We evaluate the specificity of five traditional models: autoregressive, Serfling, trimmed seasonal, wavelet-based, and generalized linear. We apply each to 12 years of emergency department visits for respiratory infection syndromes at a pediatric hospital, finding that the specificity of the five models was almost always a non-constant function of the day of the week, month, and year of the study (p < 0.05). We develop an outbreak detection method, called the expectation-variance model, based on generalized additive modeling to achieve a constant specificity by accounting for not only the expected number of visits, but also the variance of the number of visits. The expectation-variance model achieves constant specificity on all three time scales, as well as earlier detection and improved sensitivity compared to traditional methods in most circumstances. CONCLUSION: Modeling the variance of visit patterns enables real-time detection with known, constant specificity at all times. With constant specificity, public health practitioners can better interpret the alarms and better evaluate the cost-effectiveness of surveillance systems. [Abstract/Link to Full Text]
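The core idea of the expectation-variance model can be sketched without the full generalized additive machinery: if a model predicts both the expected visit count and its variance for each day, the alarm threshold can be set day by day so the false-alarm probability stays fixed. The sketch below assumes a normal approximation and hypothetical numbers; it is an illustration of the principle, not the authors' implementation:

```python
from statistics import NormalDist

def alarm_threshold(expected: float, variance: float,
                    specificity: float = 0.95) -> float:
    """Day-specific alarm threshold exceeded with probability
    (1 - specificity) under a normal approximation, so the false-alarm
    rate stays constant even as the predicted variance changes."""
    z = NormalDist().inv_cdf(specificity)
    return expected + z * variance ** 0.5

# Two hypothetical days with equal expected visit counts but different
# predicted variance: the higher-variance day gets a higher threshold,
# keeping specificity constant instead of inflating the alarm rate.
print(alarm_threshold(expected=40.0, variance=25.0))
print(alarm_threshold(expected=40.0, variance=100.0))
```

A fixed threshold (expectation only) would fire more often on high-variance days, which is exactly the non-constant specificity the paper demonstrates in the five traditional models.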

Cruz-Correia RJ, Vieira-Marques PM, Ferreira AM, Almeida FC, Wyatt JC, Costa-Pereira AM
Reviewing the integration of patient data: how systems are evolving in practice to meet patient needs.
BMC Med Inform Decis Mak. 2007;7:14.
BACKGROUND: The integration of Information Systems (IS) is essential to support shared care and to provide consistent care to individuals--patient-centred care. This paper identifies, appraises and summarises studies examining different approaches to integrate patient data from heterogeneous IS. METHODS: The literature was systematically reviewed between 1995-2005 to identify articles mentioning patient records, computers and data integration or sharing. RESULTS: Of 3124 articles, 84 were included describing 56 distinct projects. Most of the projects were on a regional scale. Integration was most commonly accomplished by messaging with pre-defined templates and middleware solutions. HL7 was the most widely used messaging standard. Direct database access and web services were the most common communication methods. The user interface for most systems was a Web browser. Regarding the type of medical data shared, 77% of projects integrated diagnosis and problems, 67% medical images and 65% lab results. More recently, significantly more IS are extending to primary care and integrating referral letters. CONCLUSION: It is clear that Information Systems are evolving to meet people's needs by implementing regional networks, allowing patient access and integration of ever more items of patient data. Many distinct technological solutions coexist to integrate patient data, using differing standards and data architectures which may hinder further interoperability. [Abstract/Link to Full Text]

Witteman CL, Renooij S, Koele P
Medicine in words and numbers: a cross-sectional survey comparing probability assessment scales.
BMC Med Inform Decis Mak. 2007;7:13.
BACKGROUND: In the complex domain of medical decision making, reasoning under uncertainty can benefit from supporting tools. Automated decision support tools often build upon mathematical models, such as Bayesian networks. These networks require probabilities which often have to be assessed by experts in the domain of application. Probability response scales can be used to support the assessment process. We compare assessments obtained with different types of response scale. METHODS: General practitioners (GPs) gave assessments on and preferences for three different probability response scales: a numerical scale, a scale with only verbal labels, and a combined verbal-numerical scale we had designed ourselves. Standard analyses of variance were performed. RESULTS: No differences in assessments over the three response scales were found. Preferences for type of scale differed: the less experienced GPs preferred the verbal scale, the most experienced preferred the numerical scale, with the groups in between having a preference for the combined verbal-numerical scale. CONCLUSION: We conclude that all three response scales are equally suitable for supporting probability assessment. The combined verbal-numerical scale is a good choice for aiding the process, since it offers numerical labels to those who prefer numbers and verbal labels to those who prefer words, and accommodates both more and less experienced professionals. [Abstract/Link to Full Text]

Akl EA, Grant BJ, Guyatt GH, Montori VM, Schünemann HJ
A decision aid for COPD patients considering inhaled steroid therapy: development and before and after pilot testing.
BMC Med Inform Decis Mak. 2007;7:12.
BACKGROUND: Decision aids (DA) are tools designed to help patients make specific and deliberative choices among disease management options. DAs can improve the quality of decision-making and reduce decisional conflict. An area not covered by a DA is the decision of a patient with chronic obstructive pulmonary disease (COPD) to use inhaled steroids which requires balancing the benefits and downsides of therapy. METHODS: We developed a DA for COPD patients considering inhaled steroid therapy using the Ottawa Decision Support Framework, the best available evidence for using inhaled steroid in COPD and the expected utility model. The development process involved patients, pulmonologists, DA developers and decision making experts. We pilot tested the DA with 8 COPD patients who completed an evaluation questionnaire, a knowledge scale, and a validated decisional conflict scale. RESULTS: The DA is a computer-based interactive tool incorporating four different decision making models. In the first part, the DA provides information about COPD as a disease, the different treatment options, and the benefits and downsides of using inhaled steroids. In the second part, it coaches the patient in the decision making process through clarifying values and preferences. Patients evaluated 10 out of 13 items of the DA positively and showed significant improvement on both the knowledge scale (p = 0.008) and the decisional conflict scale (p = 0.008). CONCLUSION: We have developed a computer-based interactive DA for COPD patients considering inhaled steroids serving as a model for other DAs in COPD, in particular related to inhaled therapies. Future research should assess the DA effectiveness. [Abstract/Link to Full Text]

Daumer M, Neuhaus A, Lederer C, Scholz M, Wolinsky JS, Heiderhoff M
Prognosis of the individual course of disease--steps in developing a decision support tool for Multiple Sclerosis.
BMC Med Inform Decis Mak. 2007;7:11.
BACKGROUND: Multiple sclerosis is a chronic disease of uncertain aetiology. Variations in its disease course make it difficult to impossible to accurately determine the prognosis of individual patients. The Sylvia Lawry Centre for Multiple Sclerosis Research (SLCMSR) developed an "online analytical processing (OLAP)" tool that takes advantage of extant clinical trials data and allows one to model the near term future course of this chronic disease for an individual patient. RESULTS: For a given patient the most similar patients of the SLCMSR database are intelligently selected by a model-based matching algorithm integrated into an OLAP-tool to enable real time, web-based statistical analyses. The underlying database (last update April 2005) contains 1,059 patients derived from 30 placebo arms of controlled clinical trials. Demographic information on the entire database and the portion selected for comparison are displayed. The result of the statistical comparison is provided as a display of the course of Expanded Disability Status Scale (EDSS) for individuals in the database with regions of probable progression over time, along with their mean relapse rate. Kaplan-Meier curves for time to sustained progression in the EDSS and time to requirement of constant assistance to walk (EDSS 6) are also displayed. The software-application OLAP anticipates the input MS patient's course on the basis of baseline values and the known course of disease for similar patients who have been followed in clinical trials. CONCLUSION: This simulation could be useful for physicians, researchers and other professionals who counsel patients on therapeutic options. The application can be modified for studying the natural history of other chronic diseases, if and when similar datasets on which the OLAP operates exist. [Abstract/Link to Full Text]

Chen R, Enberg G, Klein GO
Julius--a template based supplementary electronic health record system.
BMC Med Inform Decis Mak. 2007;7:10.
BACKGROUND: EHR systems are widely used in hospitals and primary care centres but it is usually difficult to share information and to collect patient data for clinical research. This is partly due to the different proprietary information models and inconsistent data quality. Our objective was to provide a more flexible solution enabling the clinicians to define which data to be recorded and shared for both routine documentation and clinical studies. The data should be possible to reuse through a common set of variable definitions providing a consistent nomenclature and validation of data. Another objective was that the templates used for the data entry and presentation should be possible to use in combination with the existing EHR systems. METHODS: We have designed and developed a template based system (called Julius) that was integrated with existing EHR systems. The system is driven by the medical domain knowledge defined by clinicians in the form of templates and variable definitions stored in a common data repository. The system architecture consists of three layers. The presentation layer is purely web-based, which facilitates integration with existing EHR products. The domain layer consists of the template design system, a variable/clinical concept definition system, the transformation and validation logic all implemented in Java. The data source layer utilizes an object relational mapping tool and a relational database. RESULTS: The Julius system has been implemented, tested and deployed to three health care units in Stockholm, Sweden. The initial responses from the pilot users were positive. The template system facilitates patient data collection in many ways. The experience of using the template system suggests that enabling the clinicians to be in control of the system, is a good way to add supplementary functionality to the present EHR systems. 
CONCLUSION: The approach of the template system in combination with various local EHR systems can facilitate the sharing and reuse of validated clinical information from different health care units. However, future system developments for these purposes should consider using the openEHR/CEN models with shareable archetypes. [Abstract/Link to Full Text]

Vikström A, Skånér Y, Strender LE, Nilsson GH
Mapping the categories of the Swedish primary health care version of ICD-10 to SNOMED CT concepts: rule development and intercoder reliability in a mapping trial.
BMC Med Inform Decis Mak. 2007;7:9.
BACKGROUND: Terminologies and classifications are used for different purposes and have different structures and content. Linking or mapping terminologies and classifications has been pointed out as a possible way to achieve various aims as well as to attain additional advantages in describing and documenting health care data. The objectives of this study were: to explore and develop rules to be used in a mapping process, to evaluate intercoder reliability and the assessed degree of concordance when the 'Swedish primary health care version of the International Classification of Diseases version 10' (ICD-10) is matched to the Systematized Nomenclature of Medicine, Clinical Terms (SNOMED CT), and to describe characteristics in the coding systems that are related to obstacles to high quality mapping. METHODS: Mapping (interpretation, matching, assessment and rule development) was done by two coders. The Swedish primary health care version of ICD-10 with 972 codes was randomly divided into an allotment of three sets of categories, used in three mapping sequences, A, B and C. Mapping was done independently by the coders and new rules were developed between the sequences. Intercoder reliability was measured by comparing the results after each set. The extent of matching was assessed as either 'partly' or 'completely concordant'. RESULTS: General principles for mapping were outlined before the first sequence, A. New mapping rules had significant impact on the results between sequences A-B (p < 0.01) and A-C (p < 0.001). The intercoder reliability in our study reached 83%. Obstacles to high quality mapping were mainly a lack of agreement by the coders due to structural and content factors in SNOMED CT and in the current ICD-10 version. The predominant reasons for this were difficulties in interpreting the meaning of the categories in the current ICD-10 version, and the presence of many related concepts in SNOMED CT. 
CONCLUSION: Mapping from ICD-10-categories to SNOMED CT needs clear and extensive rules. It is possible to reach high intercoder reliability in mapping from ICD-10-categories to SNOMED CT. However, several obstacles to high quality mapping remain due to structure and content characteristics in both coding systems. [Abstract/Link to Full Text]
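Intercoder reliability reported as a percentage (83% here) is typically simple percent agreement between the two coders' chosen targets. A minimal sketch; the concept identifiers below are hypothetical placeholders, not real ICD-10-to-SNOMED CT mappings:

```python
def percent_agreement(coder_a: list, coder_b: list) -> float:
    """Intercoder reliability as simple percent agreement: the share of
    items both coders mapped to the same target concept."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical target concepts chosen by two coders for six categories:
# they agree on four of six items.
a = ["C1", "C2", "C3", "C4", "C5", "C6"]
b = ["C1", "C2", "C9", "C4", "C5", "C8"]
print(f"agreement = {percent_agreement(a, b):.0%}")
```

Percent agreement does not correct for chance agreement; a statistic such as Cohen's kappa would be the usual refinement when the set of possible targets is small.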

Recent Articles in Bulletin of the Medical Library Association

Proceedings, One Hundredth Annual Meeting, Medical Library Association, Inc. Vancouver, British Columbia, Canada May 5-11, 2000.
Bull Med Libr Assoc. 2001 Jan;89(1):97-125. [Abstract/Link to Full Text]

Plutchak TS
Keep those cards and letters coming!
Bull Med Libr Assoc. 2000 Jul;88(3):261-2. [Abstract/Link to Full Text]

Squires SJ
Proceedings, Ninety-ninth Annual Meeting Medical Library Association, Inc. Chicago, Illinois May 14-19, 1999.
Bull Med Libr Assoc. 2000 Jan;88(1):97-132. [Abstract/Link to Full Text]

100 YEARS OF MLA Views from the Bulletin.
Bull Med Libr Assoc. 1999 Oct;87(4):489. [Abstract/Link to Full Text]

Kronenfeld MR
Bull Med Libr Assoc. 1999 Oct;87(4):381-2. [Abstract/Link to Full Text]

100 YEARS OF MLA Views from the Bulletin.
Bull Med Libr Assoc. 1999 Jul;87(3):355. [Abstract/Link to Full Text]

Schloman BF
Bull Med Libr Assoc. 1999 Jul;87(3):275-6. [Abstract/Link to Full Text]

Kronenfeld MR
Bull Med Libr Assoc. 1999 Jul;87(3):241-2. [Abstract/Link to Full Text]

100 YEARS OF MLA Views from the Bulletin.
Bull Med Libr Assoc. 1999 Apr;87(2):225. [Abstract/Link to Full Text]

Proceedings, Ninety-eighth Annual Meeting, Medical Library Association, Inc. Philadelphia, Pennsylvania, May 23-27, 1998.
Bull Med Libr Assoc. 1999 Jan;87(1):111-39. [Abstract/Link to Full Text]

100 YEARS OF MLA Views from the Bulletin.
Bull Med Libr Assoc. 1999 Jan;87(1):97. [Abstract/Link to Full Text]

Revision of BMLA information for authors.
Bull Med Libr Assoc. 1999 Jan;87(1):93-4. [Abstract/Link to Full Text]

Messerle J
Bull Med Libr Assoc. 1999 Jan;87(1):85. [Abstract/Link to Full Text]

Shontz D
Effect of fines on length of checkout and overdues in a medical library.
Bull Med Libr Assoc. 1999 Jan;87(1):82-4. [Abstract/Link to Full Text]

Carr AF, Stibravy R
Designing a Web bookmarks page for reference desk use.
Bull Med Libr Assoc. 1999 Jan;87(1):80-2. [Abstract/Link to Full Text]

100 YEARS OF MLA Views from the Bulletin.
Bull Med Libr Assoc. 1998 Oct;86(4):617-8. [Abstract/Link to Full Text]

100 YEARS OF MLA Views from the Bulletin.
Bull Med Libr Assoc. 1998 Jul;86(3):425. [Abstract/Link to Full Text]

Homan JM
Whither peer review: Prague '97.
Bull Med Libr Assoc. 1998 Jul;86(3):421-2. [Abstract/Link to Full Text]

Detlefsen EG, Ball AL, Su LT
Bull Med Libr Assoc. 1998 Jul;86(3):377-9. [Abstract/Link to Full Text]

100 YEARS OF MLA Views from the Bulletin.
Bull Med Libr Assoc. 1998 Apr;86(2):283. [Abstract/Link to Full Text]

Introduction: the medical library association as a reflection on the profession and society.
Bull Med Libr Assoc. 1998 Apr;86(2):249-50. [Abstract/Link to Full Text]

Browning W
Association of Medical Librarians. Minutes of first meeting.
Bull Med Libr Assoc. 1998 Apr;86(2):228. [Abstract/Link to Full Text]

Presentation of Marcia C. Noyes Award.
Bull Med Libr Assoc. 1998 Apr;86(2):192. [Abstract/Link to Full Text]

Introduction: focusing on the personalities of the medical library association.
Bull Med Libr Assoc. 1998 Apr;86(2):155-6. [Abstract/Link to Full Text]

Ledbetter LS
Proceedings, Ninety-seventh Annual Meeting, Medical Library Association, Inc. Seattle, Washington, May 24-28, 1997.
Bull Med Libr Assoc. 1998 Jan;86(1):117-43. [Abstract/Link to Full Text]

100 YEARS OF MLA Views from the Bulletin.
Bull Med Libr Assoc. 1998 Jan;86(1):103. [Abstract/Link to Full Text]

The Bulletin celebrates MLA's centennial.
Bull Med Libr Assoc. 1998 Jan;86(1):101. [Abstract/Link to Full Text]

Plutchak TS
Bull Med Libr Assoc. 2001 Oct;89(4):409-10. [Abstract/Link to Full Text]

Rambo N, Zenan JS, Alpi KM, Burroughs CM, Cahn MA, Rankin J
Public Health Outreach Forum: lessons learned.
Bull Med Libr Assoc. 2001 Oct;89(4):403-6. [Abstract/Link to Full Text]

Zenan JS, Rambo N, Burroughs CM, Alpi KM, Cahn MA, Rankin J
Public Health Outreach Forum: report.
Bull Med Libr Assoc. 2001 Oct;89(4):400-3. [Abstract/Link to Full Text]

Recent Articles in Journal of the American Medical Informatics Association

Rosenbloom ST
Approaches to evaluating electronic prescribing.
J Am Med Inform Assoc. 2006 Jul-Aug;13(4):399-401. [Abstract/Link to Full Text]

Hales JW
Presentation of the Morris F. Collen Award to Reed McArthur Gardner, PhD.
J Am Med Inform Assoc. 2006 May-Jun;13(3):356-9. [Abstract/Link to Full Text]

Judge J, Field TS, DeFlorio M, Laprino J, Auger J, Rochon P, Bates DW, Gurwitz JH
Prescribers' responses to alerts during medication ordering in the long term care setting.
J Am Med Inform Assoc. 2006 Jul-Aug;13(4):385-90.
OBJECTIVE: Computerized physician order entry with clinical decision support has been shown to improve medication safety in adult inpatients, but few data are available regarding its usefulness in the long-term care setting. The objective of this study was to examine opportunities for improving medication safety in that clinical setting by determining the proportion of medication orders that would generate a warning message to the prescriber via a computerized clinical decision support system and by assessing the extent to which these alerts would affect prescribers' actions. DESIGN: The study was set within a randomized controlled trial of computerized clinical decision support conducted in the long-stay units of a large, academically affiliated long-term care facility. In March 2002, a computer-based clinical decision support system (CDSS) was added to an existing computerized physician order entry (CPOE) system. Over a subsequent one-year study period, prescribers ordering drugs for residents on three resident-care units of the facility were presented with alerts; these alerts were not displayed to prescribers in the four control units. MEASUREMENTS: We assessed the frequency of drug orders associated with various categories of alerts across all participating units of the facility. To assess the impact of actually receiving an alert on prescriber behavior during drug ordering, we calculated separately for the intervention and control units the proportion of alerts, within each category, that were followed by an appropriate action, and estimated the relative risk of an appropriate action in the intervention units compared to the control units. RESULTS: During the 12 months of the study, there were 445 residents on the participating units of the facility, contributing 3,726 resident-months of observation time. During this period, 47,997 medication orders were entered through the CPOE system, approximately 9 medication orders per resident per month. 
A total of 9,414 alerts were triggered (2.5 alerts per resident-month). The alert categories most often triggered were related to risks of central nervous system side effects such as over-sedation (20%). Alerts for risk of drug-associated constipation (13%) or renal insufficiency/electrolyte imbalance (12%) were also common. Twelve percent of the alerts were related to orders for warfarin. Overall, prescribers who received alerts were only slightly more likely to take an appropriate action (relative risk 1.11, 95% confidence interval 1.00-1.22). Alerts related to orders for warfarin or central nervous system side effects were most likely to engender an appropriate action, such as ordering a recommended laboratory test or canceling an ordered drug. CONCLUSION: Long-term care facilities must implement new system-level approaches with the potential to improve medication safety for their residents. The number of medication orders that triggered a warning message in this study suggests that CPOE with a clinical decision support system may represent one such tool. However, the relatively low rate of response to these alerts suggests that further refinements to such systems are required and that their impact on medication errors and adverse drug events must be carefully assessed. [Abstract/Link to Full Text]
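The headline statistic in this abstract (relative risk 1.11, 95% CI 1.00-1.22) is a standard ratio of event proportions between the two arms. A minimal sketch of how such an estimate and its log-scale (Katz) confidence interval are computed, using hypothetical counts since the abstract reports only the summary figures:

```python
import math

def relative_risk(a, n1, b, n2):
    """Relative risk of an event in an intervention arm (a events of n1)
    vs. a control arm (b events of n2), with a 95% CI computed on the
    log scale (Katz method)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper

# Hypothetical alert-response counts, for illustration only
rr, lo, hi = relative_risk(600, 1200, 540, 1200)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

An interval whose lower bound touches 1.00, as in the study, indicates a borderline effect: the alerts changed prescriber behavior, but only marginally.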

Cho I, Park HA
Evaluation of the expressiveness of an ICNP-based nursing data dictionary in a computerized nursing record system.
J Am Med Inform Assoc. 2006 Jul-Aug;13(4):456-64.
This study evaluated the domain completeness and expressiveness of a nursing data dictionary (NDD) based on the International Classification for Nursing Practice (ICNP) through its application as a standard vocabulary in an enterprise electronic medical record (EMR) system at a single tertiary hospital in Korea. Data from 2,262 inpatients obtained over a period of 9 weeks (May to July 2003) were extracted from the EMR system for analysis. Among the 530,218 data-input events, 401,190 (75.7%) were entered from the NDD, 20,550 (3.9%) used only free text, and 108,478 (20.4%) used a combination of coded data and free text. A content analysis of the free-text events showed that 80.3% of the expressions could be found in the NDD, whereas 10.9% were context-specific expressions such as direct quotations of patient complaints and responses, and references to the care plan or orders of physicians. A total of 7.8% of the expressions were used for a supplementary purpose, such as adding a conjunction or end verb to make an expression read as natural language. Only 1.0% of the expressions were identified as not being covered by the NDD. This evaluation study demonstrates that the ICNP-based NDD has sufficient power to cover most of the expressions used in a clinical nursing setting. [Abstract/Link to Full Text]

Payne TH, Graham G
Managing the life cycle of electronic clinical documents.
J Am Med Inform Assoc. 2006 Jul-Aug;13(4):438-45.
OBJECTIVE: To develop a model of the life cycle of clinical documents from inception to use in a person's medical record, including workflow requirements from clinical practice, local policy, and regulation. DESIGN: We propose a model for the life cycle of clinical documents as a framework for research on documentation within electronic medical record (EMR) systems. Our proposed model includes three axes: the stages of the document, the roles of those involved with the document, and the actions those involved may take on the document at each stage. The model includes the rules to describe who (in what role) can perform what actions on the document, and at what stages they can perform them. Rules are derived from needs of clinicians, and requirements of hospital bylaws and regulators. RESULTS: Our model encompasses current practices for paper medical records and workflow in some EMR systems. Commercial EMR systems include methods for implementing document workflow rules. Workflow rules that are part of this model mirror functionality in the Department of Veterans Affairs (VA) EMR system, where the Authorization/Subscription Utility permits document life cycle rules to be written in an English-like fashion. CONCLUSIONS: Creating a model of the life cycle of clinical documents serves as a framework for discussion of document workflow, how rules governing workflow can be implemented in EMR systems, and future research of electronic documentation. [Abstract/Link to Full Text]

Johnson KB, Fitzhenry F
Case report: activity diagrams for integrating electronic prescribing tools into clinical workflow.
J Am Med Inform Assoc. 2006 Jul-Aug;13(4):391-5.
To facilitate the future implementation of an electronic prescribing system, this case study modeled prescription management processes in various primary care settings. The Vanderbilt e-prescribing design team conducted initial interviews with clinic managers, physicians and nurses, and then represented the sequences of steps carried out to complete prescriptions in activity diagrams. The diagrams covered outpatient prescribing for patients during a clinic visit and between clinic visits. Practice size, practice setting, and practice specialty type influenced the prescribing processes used. The model developed may be useful to others engaged in building or tailoring an e-prescribing system to meet the specific workflows of various clinic settings. [Abstract/Link to Full Text]

Liu N, Marenco L, Miller PL
ResourceLog: an embeddable tool for dynamically monitoring the usage of web-based bioscience resources.
J Am Med Inform Assoc. 2006 Jul-Aug;13(4):432-7.
The present study described an open source application, ResourceLog, that allows website administrators to record and analyze the usage of online resources. The application includes four components: logging, data mining, administrative interface, and back-end database. The logging component is embedded in the host website. It extracts and streamlines information about the Web visitors, the scripts, and dynamic parameters from each page request. The data mining component runs as a set of scheduled tasks that identify visitors of interest, such as those who have heavily used the resources. The identified visitors will be automatically subjected to a voluntary user survey. The usage of the website content can be monitored through the administrative interface and subjected to statistical analyses. As a pilot project, ResourceLog has been implemented in SenseLab, a Web-based neuroscience database system. ResourceLog provides a robust and useful tool to aid system evaluation of a resource-driven Web application, with a focus on determining the effectiveness of data sharing in the field and with the general public. [Abstract/Link to Full Text]

Hensel BK, Demiris G, Courtney KL
Defining obtrusiveness in home telehealth technologies: a conceptual framework.
J Am Med Inform Assoc. 2006 Jul-Aug;13(4):428-31.
The literature of home telehealth technology recommends that systems be designed to minimize their obtrusiveness to end users. However, this term is neither explicitly defined nor consistently used. This paper presents a definition of the concept of obtrusiveness. Within this definition, twenty-two categories of what may be perceived as obtrusive in home telehealth technology are proposed based on a review of the literature. These categories are grouped into eight dimensions. This effort represents an initial step toward developing measures of obtrusiveness associated with home telehealth technology. A validated and reliable instrument would allow for evaluation of individual applications as well as theory-building across applications. [Abstract/Link to Full Text]

Aphinyanaphongs Y, Statnikov A, Aliferis CF
A comparison of citation metrics to machine learning filters for the identification of high quality MEDLINE documents.
J Am Med Inform Assoc. 2006 Jul-Aug;13(4):446-55.
OBJECTIVE: The present study explores the discriminatory performance of existing and novel gold-standard-specific machine learning (GSS-ML) focused filter models (i.e., models built specifically for a retrieval task and a gold standard against which they are evaluated) and compares their performance to citation counts, impact factors, and non-specific machine learning (NS-ML) models (i.e., models built for a different task and/or different gold standard). DESIGN: Three gold standard corpora were constructed using the SSOAB bibliography, the ACPJ-cited treatment articles, and the ACPJ-cited etiology articles. Citation counts and impact factors were obtained for each article. Support vector machine models were used to classify the articles using combinations of content, impact factors, and citation counts as predictors. MEASUREMENTS: Discriminatory performance was estimated using the area under the receiver operating characteristic curve and n-fold cross-validation. RESULTS: For all three gold standards and tasks, GSS-ML filters outperformed citation count, impact factors, and NS-ML filters. Combinations of content with impact factor or citation count produced no or negligible improvements to the GSS machine learning filters. CONCLUSIONS: These experiments provide evidence that when building information retrieval filters focused on a retrieval task and corresponding gold standard, the filter models have to be built specifically for this task and gold standard. Under those conditions, machine learning filters outperform standard citation metrics. Furthermore, citation counts and impact factors add marginal value to discriminatory performance. Previous research that claimed better performance of citation metrics than machine learning in one of the corpora examined here is attributed to using machine learning filters built for a different gold standard and task. [Abstract/Link to Full Text]
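The evaluation metric used in this study, the area under the ROC curve, is equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen positive example is ranked above a randomly chosen negative one. A minimal, library-free sketch (the classifier scores are invented for illustration):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive example is scored above a randomly chosen
    negative one (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for gold-standard positive and negative articles
print(auc([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2, 0.1]))  # 0.9375
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why it is a natural yardstick for comparing filters built on different predictors.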

Sanders DL, Aronsky D
Biomedical informatics applications for asthma care: a systematic review.
J Am Med Inform Assoc. 2006 Jul-Aug;13(4):418-27.
Asthma is a common condition associated with significant patient morbidity and health care costs. Although widely accepted evidence-based guidelines for asthma management exist, unnecessary variation in patient care remains. Application of biomedical informatics techniques is one potential way to improve care for asthmatic patients. We performed a systematic literature review to identify computerized applications for clinical asthma care. Studies were evaluated for their clinical domain, developmental stage and study design. Additionally, prospective trials were identified and analyzed for potential study biases, study effects, and clinical study characteristics. Sixty-four papers were selected for review. Publications described asthma detection or diagnosis (18 papers), asthma monitoring or prevention (13 papers), patient education (13 papers), and asthma guidelines or therapy (20 papers). The majority of publications described projects in early stages of development or with non-prospective study designs. Twenty-one prospective trials were identified, which evaluated both clinical and non-clinical impacts on patient care. Most studies took place in the outpatient clinic environment, with minimal study of the emergency department or inpatient settings. Few studies demonstrated evidence of computerized applications improving clinical outcomes. Further research is needed to prospectively evaluate the impact of using biomedical informatics to improve care of asthmatic patients. [Abstract/Link to Full Text]

Strayer SM, Slawson DC, Shaughnessy AF
Disseminating drug prescribing information: the COX-2 inhibitor withdrawals.
J Am Med Inform Assoc. 2006 Jul-Aug;13(4):396-8.
This case study examined the recent withdrawal of valdecoxib to determine the timeliness of updates in information sources commonly used by healthcare professionals. The method included assembling a purposive sample of 15 drug reference and warning systems that were then systematically monitored for several months after the withdrawal of valdecoxib to determine the time to update this information. These information sources were classified and described qualitatively. A time-to-diffusion curve was plotted, and the average number of days to report the drug withdrawal or update reference databases was calculated. Only 2 of 15 information systems reported the drug withdrawal on the actual date of the FDA announcement. Institutional electronic textbooks took an average of 109.8 days (+/-14 days) to report the withdrawal. In addition, one pharma-sponsored dissemination source (Peerview Press) had not updated its information as of this publication. [Abstract/Link to Full Text]

McGregor JC, Weekes E, Forrest GN, Standiford HC, Perencevich EN, Furuno JP, Harris AD
Impact of a computerized clinical decision support system on reducing inappropriate antimicrobial use: a randomized controlled trial.
J Am Med Inform Assoc. 2006 Jul-Aug;13(4):378-84.
OBJECTIVE: Many hospitals utilize antimicrobial management teams (AMTs) to improve patient care. However, most function with minimal computer support. We evaluated the effectiveness and cost-effectiveness of a computerized clinical decision support system for the management of antimicrobial utilization. DESIGN: A randomized controlled trial in adult inpatients between May 10 and August 3, 2004. Antimicrobial utilization was managed by an existing AMT using the system in the intervention arm and without the system in the control arm. The system was developed to alert the AMT of potentially inadequate antimicrobial therapy. MEASUREMENTS: Outcomes assessed were hospital antimicrobial expenditures, mortality, length of hospitalization, and time spent managing antimicrobial utilization. RESULTS: The AMT intervened on 359 (16%) of 2,237 patients in the intervention arm and 180 (8%) of 2,270 in the control arm, while spending approximately one hour less each day on the intervention arm. Hospital antimicrobial expenditures were $285,812 in the intervention arm and $370,006 in the control arm, for a savings of $84,194 (23%), or $37.64 per patient. No significant difference was observed in mortality (3.26% vs. 2.95%, p = 0.55) or length of hospitalization (3.84 vs. 3.99 days, p = 0.38). CONCLUSION: Use of the system facilitated the management of antimicrobial utilization by allowing the AMT to intervene on more patients receiving inadequate antimicrobial therapy and to achieve substantial time and cost savings for the hospital. This is the first study that demonstrates in a patient-randomized controlled trial that computerized clinical decision support systems can improve existing antimicrobial management programs. [Abstract/Link to Full Text]

Niland JC, Rouse L, Stahl DC
An informatics blueprint for healthcare quality information systems.
J Am Med Inform Assoc. 2006 Jul-Aug;13(4):402-17.
There is a critical gap in our nation's ability to accurately measure and manage the quality of medical care. A robust healthcare quality information system (HQIS) has the potential to address this deficiency through the capture, codification, and analysis of information about patient treatments and related outcomes. Because non-technical issues often present the greatest challenges, this paper provides an overview of these socio-technical issues in building a successful HQIS, including the human, organizational, and knowledge management (KM) perspectives. Through an extensive literature review and direct experience in building a practical HQIS (the National Comprehensive Cancer Network Outcomes Research Database system), we have formulated an "informatics blueprint" to guide the development of such systems. While the blueprint was developed to facilitate healthcare quality information collection, management, analysis, and reporting, the concepts and advice provided may be extensible to the development of other types of clinical research information systems. [Abstract/Link to Full Text]

Kuperman GJ, Reichley RM, Bailey TC
Using commercial knowledge bases for clinical decision support: opportunities, hurdles, and recommendations.
J Am Med Inform Assoc. 2006 Jul-Aug;13(4):369-71. [Abstract/Link to Full Text]

Kilbridge PM, Campbell UC, Cozart HB, Mojarrad MG
Automated surveillance for adverse drug events at a community hospital and an academic medical center.
J Am Med Inform Assoc. 2006 Jul-Aug;13(4):372-7.
OBJECTIVES: To compare the rates and nature of ADEs at an academic medical center and a community hospital using a single computerized ADE surveillance system. DESIGN: Prospective cohort study of patients admitted to two tertiary care hospitals. OUTCOME MEASURE: Adverse drug events identified by automated surveillance and voluntary reporting. METHODS: We implemented an automated surveillance system across an academic medical center and a community hospital. Potential events identified by the computer were reviewed in detail by medication safety pharmacists and scored for causality and severity. Findings were compared between the two hospitals, and with voluntary reports from nurses and pharmacists. RESULTS: Over the 8-month study period, 25,177 patients were admitted to the university hospital and 8,029 to the community hospital. There were 1,116 ADEs in 900 patients at the university hospital for an overall rate of 4.4 ADEs per 100 admissions. At the community hospital, 399 patients experienced 501 ADEs for a rate of 6.2 events per 100 admissions. Rates of antibiotic-associated colitis, drug-induced hypoglycemia, and anticoagulation-related ADEs were significantly higher at the community hospital compared with the university hospital. Computerized surveillance detected ADEs at a rate 3.6 times that of voluntary reporting at the university hospital and 12.3 times that at the community hospital. CONCLUSIONS: Operation of a common automated ADE surveillance system across hospitals permits meaningful comparison of ADE rates in different inpatient settings. Automated surveillance detects ADEs at rates far higher than voluntary reporting, and the difference may be greater in the community hospital setting. Community hospitals may experience higher rates of certain types of ADEs compared with academic medical centers. [Abstract/Link to Full Text]

Sun JY, Sun Y
A system for automated lexical mapping.
J Am Med Inform Assoc. 2006 May-Jun;13(3):334-43.
OBJECTIVE: To automate the mapping of disparate databases to standardized medical vocabularies. BACKGROUND: Merging of clinical systems and medical databases, or aggregation of information from disparate databases, frequently requires a process whereby vocabularies are compared and similar concepts are mapped. DESIGN: Using a normalization phase followed by a novel alignment stage inspired by DNA sequence alignment methods, automated lexical mapping can map terms from various databases to standard vocabularies such as the UMLS (Unified Medical Language System) and LOINC (Logical Observation Identifier Names and Codes). MEASUREMENTS: This automated lexical mapping was evaluated using three real-world laboratory databases from different health care institutions. The authors report the sensitivity, specificity, percentage correct (true positives plus true negatives divided by total number of terms), and true positive and true negative rates as measures of system performance. RESULTS: The alignment algorithm was able to map 57% to 78% (average of 63% over all runs and databases) of equivalent concepts through lexical mapping alone. True positive rates ranged from 18% to 70%; true negative rates ranged from 5% to 52%. CONCLUSION: Lexical mapping can facilitate the integration of data from diverse sources and decrease the time and cost required for manual mapping and integration of clinical systems and medical databases. [Abstract/Link to Full Text]
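The two-phase approach this abstract describes (a normalization phase, then an alignment-based matching stage) can be sketched as follows. This illustration substitutes Python's generic `difflib` similarity for the authors' DNA-alignment-inspired scoring, and the vocabulary entries and threshold are invented:

```python
from difflib import SequenceMatcher

def normalize(term):
    """Normalization phase: lowercase, strip punctuation, collapse whitespace."""
    cleaned = "".join(c if c.isalnum() else " " for c in term.lower())
    return " ".join(cleaned.split())

def best_match(local_term, vocabulary, threshold=0.6):
    """Alignment phase (sketch): score each standard term against the
    local term with a generic sequence similarity, and keep the best
    hit only if it clears the threshold."""
    query = normalize(local_term)
    score, term = max((SequenceMatcher(None, query, normalize(v)).ratio(), v)
                      for v in vocabulary)
    return (term, score) if score >= threshold else (None, score)

vocab = ["Glucose [Mass/volume] in Serum", "Hemoglobin [Mass/volume] in Blood"]
print(best_match("GLUCOSE, SERUM", vocab))
```

The threshold trades true positives against true negatives, which is why the paper reports both rates separately across the three laboratory databases.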

Kurc T, Janies DA, Johnson AD, Langella S, Oster S, Hastings S, Habib F, Camerlengo T, Ervin D, Catalyurek UV, Saltz JH
An XML-based system for synthesis of data from disparate databases.
J Am Med Inform Assoc. 2006 May-Jun;13(3):289-301.
Diverse data sets have become key building blocks of translational biomedical research. Data types captured and referenced by sophisticated research studies include high throughput genomic and proteomic data, laboratory data, data from imagery, and outcome data. In this paper, the authors present the application of an XML-based data management system to support integration of data from disparate data sources and large data sets. This system facilitates management of XML schemas and on-demand creation and management of XML databases that conform to these schemas. They illustrate the use of this system in an application for genotype-phenotype correlation analyses. This application implements a method of phenotype-genotype correlation based on phylogenetic optimization of large data sets of mouse SNPs and phenotypic data. The application workflow requires the management and integration of genomic information and phenotypic data from external data repositories and from the results of phenotype-genotype correlation analyses. Our implementation supports the process of carrying out a complex workflow that includes large-scale phylogenetic tree optimizations and application of Maddison's concentrated changes test to large phylogenetic tree data sets. The data management system also allows collaborators to share data in a uniform way and supports complex queries that target data sets. [Abstract/Link to Full Text]

Yu C, Liu Z, McKenna T, Reisner AT, Reifman J
A method for automatic identification of reliable heart rates calculated from ECG and PPG waveforms.
J Am Med Inform Assoc. 2006 May-Jun;13(3):309-20.
OBJECTIVE: The development and application of data-driven decision-support systems for medical triage, diagnostics, and prognostics pose special requirements on physiologic data; in particular, the data must be reliable in order to produce meaningful results. The authors describe a method that automatically estimates the reliability of reference heart rates (HRr) derived from electrocardiogram (ECG) waveforms and photoplethysmogram (PPG) waveforms recorded by vital-signs monitors. The reliability is quantitatively expressed through a quality index (QI) for each HRr. DESIGN: The proposed method estimates the reliability of heart rates from vital-signs monitors by (1) assessing the quality of the ECG and PPG waveforms, (2) separately computing heart rates from these waveforms, and (3) concisely combining this information into a QI that considers the physical redundancy of the signal sources and independence of heart rate calculations. The assessment of the waveforms is performed by a Support Vector Machine classifier, and the independent computation of heart rate from the waveforms is performed by an adaptive peak identification technique, termed ADAPIT, which is designed to filter out motion-induced noise. RESULTS: The authors evaluated the method against 158 randomly selected data samples of trauma patients collected during helicopter transport, each sample consisting of 7-second ECG and PPG waveform segments and their associated HRr. They compared the results of the algorithm against manual analysis performed by human experts and found that in 92% of the cases, the algorithm either matches or is more conservative than the human's QI qualification. In the remaining 8% of the cases, the algorithm infers a less conservative QI, though in most cases this was because of algorithm/human disagreement over ambiguous waveform quality. If these ambiguous waveforms were relabeled, the misclassification rate would drop from 8% to 3%. 
CONCLUSION: This method provides a robust approach for automatically assessing the reliability of large quantities of heart rate data and the waveforms from which they are derived. [Abstract/Link to Full Text]
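The combination step this abstract describes, folding waveform-quality assessments and agreement between independently computed heart rates into a single quality index, can be sketched as below. The three-tier scoring and the 5 bpm tolerance are assumptions for illustration, not the authors' actual rules:

```python
def quality_index(ecg_ok, ppg_ok, hr_ecg, hr_ppg, tol_bpm=5.0):
    """Sketch of a heart-rate quality index (QI): confidence is highest
    when two physically redundant, independently processed sources
    (ECG and PPG) both pass quality checks and agree on the rate."""
    if ecg_ok and ppg_ok and abs(hr_ecg - hr_ppg) <= tol_bpm:
        return 1.0  # both waveforms clean and the two rates agree
    if ecg_ok or ppg_ok:
        return 0.5  # only one source is trustworthy
    return 0.0      # neither waveform passed the quality check

print(quality_index(True, True, 72, 74))    # 1.0
print(quality_index(True, False, 72, 130))  # 0.5
```

The key design idea carried over from the paper is that agreement between independent estimates, not either estimate alone, is what justifies a high reliability score.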

Khan AN, Griffith SP, Moore C, Russell D, Rosario AC, Bertolli J
Standardizing laboratory data by mapping to LOINC.
J Am Med Inform Assoc. 2006 May-Jun;13(3):353-5.
The authors describe a pilot project to standardize local laboratory data at five Indian Health Service (IHS) medical facilities by mapping laboratory test names to Logical Observation Identifier Names and Codes (LOINC). An automated mapping tool was developed to assign LOINC codes. At these sites, they were able to map from 63% to 76% of the local active laboratory tests to LOINC using the mapping tool. Eleven percent to 27% of the tests were mapped manually. They could not assign LOINC codes to 6% to 19% of the laboratory tests due to incomplete or incorrect information about these tests. The results achieved approximate those of other similar efforts. Mapping of laboratory test names to LOINC codes will allow IHS to aggregate laboratory data more easily for disease surveillance and clinical and administrative reporting efforts. This project may provide a model for standardization efforts in other health systems. [Abstract/Link to Full Text]

Rosenbloom ST, Qi X, Riddle WR, Russell WE, DonLevy SC, Giuse D, Sedman AB, Spooner SA
Implementing pediatric growth charts into an electronic health record system.
J Am Med Inform Assoc. 2006 May-Jun;13(3):302-8.
Electronic health record (EHR) systems are increasingly being adopted in pediatric practices; however, requirements for integrated growth charts are poorly described and are not standardized in current systems. The authors integrated growth chart functionality into an EHR system being developed and installed in a multispecialty pediatric clinic in an academic medical center. During a three-year observation period, rates of electronically documented values for weight, stature, and head circumference increased from fewer than ten total per weekday, up to 488 weight values, 293 stature values, and 74 head circumference values (p<0.001 for each measure). By the end of the observation period, users accessed the growth charts an average 175 times per weekday, compared to 127 patient visits per weekday to the sites that most closely monitored pediatric growth. Because EHR systems and integrated growth charts can manipulate data, perform calculations, and adapt to user preferences and patient characteristics, users may expect greater functionality from electronic growth charts than from paper-based growth charts. [Abstract/Link to Full Text]

Rosenbloom ST, Miller RA, Johnson KB, Elkin PL, Brown SH
Interface terminologies: facilitating direct entry of clinical data into electronic health record systems.
J Am Med Inform Assoc. 2006 May-Jun;13(3):277-88.
Previous investigators have defined clinical interface terminology as a systematic collection of health care-related phrases (terms) that supports clinicians' entry of patient-related information into computer programs, such as clinical "note capture" and decision support tools. Interface terminologies also can facilitate display of computer-stored patient information to clinician-users. Interface terminologies "interface" between clinicians' own unfettered, colloquial conceptualizations of patient descriptors and the more structured, coded internal data elements used by specific health care application programs. The intended uses of a terminology determine its conceptual underpinnings, structure, and content. As a result, the desiderata for interface terminologies differ from desiderata for health care-related terminologies used for storage (e.g., SNOMED-CT), information retrieval (e.g., MeSH), and classification (e.g., ICD9-CM). Necessary but not sufficient attributes for an interface terminology include adequate synonym coverage, presence of relevant assertional knowledge, and a balance between pre- and post-coordination. To place interface terminologies in context, this article reviews historical goals and challenges of clinical terminology development in general and then focuses on the unique features of interface terminologies. [Abstract/Link to Full Text]

Ferranti JM, Musser RC, Kawamoto K, Hammond WE
The clinical document architecture and the continuity of care record: a critical analysis.
J Am Med Inform Assoc. 2006 May-Jun;13(3):245-52.
Health care provides many opportunities in which the sharing of data between independent sites is highly desirable. Several standards are required to produce the functional and semantic interoperability necessary to support the exchange of such data: a common reference information model, a common set of data elements, a common terminology, common data structures, and a common transport standard. This paper addresses one component of that set of standards: the ability to create a document that supports the exchange of structured data components. Unfortunately, two different standards development organizations have produced similar standards for that purpose based on different information models: Health Level 7 (HL7)'s Clinical Document Architecture (CDA) and The American Society for Testing and Materials (ASTM International) Continuity of Care Record (CCR). The coexistence of both standards might require mapping from one standard to the other, which could be accompanied by a loss of information and functionality. This paper examines and compares the two standards, emphasizes the strengths and weaknesses of each, and proposes a strategy of harmonization to enhance future progress. While some of the authors are members of HL7 and/or ASTM International, the authors stress that the viewpoints represented in this paper are those of the authors and do not represent the official viewpoints of either HL7 or of ASTM International. [Abstract/Link to Full Text]

Green JM, Wilcke JR, Abbott J, Rees LP
Development and evaluation of methods for structured recording of heart murmur findings using SNOMED-CT post-coordination.
J Am Med Inform Assoc. 2006 May-Jun;13(3):321-33.
OBJECTIVE: This study evaluated an existing SNOMED-CT model for structured recording of heart murmur findings and compared it to a concept-dependent attributes model using content from SNOMED-CT. METHODS: The authors developed a model for recording heart murmur findings as an alternative to SNOMED-CT's use of Interprets and Has interpretation. A micro-nomenclature was then created to support each model using subset and extension mechanisms described for SNOMED-CT. Each micro-nomenclature included a partonomy of cardiac cycle timing values. A mechanism for handling ranges of values was also devised. One hundred clinical heart murmurs were recorded using purpose-built recording software based on both models. RESULTS: Each micro-nomenclature was extended through the addition of the same list of concepts. SNOMED role grouping was required in both models. All 100 clinical murmurs were described using each model. The only major differences between the two models were the number of relationship rows required for storage and the hierarchical assignments of concepts within the micro-nomenclatures. CONCLUSION: The authors were able to capture 100 clinical heart murmurs with both models. Requirements for implementing the two models were virtually identical. In fact, data stored using these models could be easily interconverted. There is no apparent penalty for implementing either approach. [Abstract/Link to Full Text]

Kaushal R, Jha AK, Franz C, Glaser J, Shetty KD, Jaggi T, Middleton B, Kuperman GJ, Khorasani R, Tanasijevic M, Bates DW
Return on investment for a computerized physician order entry system.
J Am Med Inform Assoc. 2006 May-Jun;13(3):261-6.
OBJECTIVE: Although computerized physician order entry (CPOE) may decrease errors and improve quality, hospital adoption has been slow. The high costs and limited data on financial benefits of CPOE systems are a major barrier to adoption. The authors assessed the costs and financial benefits of the CPOE system at Brigham and Women's Hospital over ten years. DESIGN: Cost and benefit estimates of a hospital CPOE system at Brigham and Women's Hospital (BWH), a 720-adult bed, tertiary care, academic hospital in Boston. MEASUREMENTS: Institutional experts provided data about the costs of the CPOE system. Benefits were determined from published studies of the BWH CPOE system, interviews with hospital experts, and relevant internal documents. Net overall savings to the institution and operating budget savings were determined. All figures are presented in 2002 dollars. RESULTS: Between 1993 and 2002, the BWH spent $11.8 million to develop, implement, and operate CPOE. Over ten years, the system saved BWH $28.5 million for cumulative net savings of $16.7 million and net operating budget savings of $9.5 million given the institutional 80% prospective reimbursement rate. The CPOE system elements that resulted in the greatest cumulative savings were renal dosing guidance, nursing time utilization, specific drug guidance, and adverse drug event prevention. The CPOE system at BWH has resulted in substantial savings, including operating budget savings, to the institution over ten years. CONCLUSION: Other hospitals may be able to save money and improve patient safety by investing in CPOE systems. [Abstract/Link to Full Text]
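The return-on-investment arithmetic reported in the abstract can be checked directly from the figures it gives (all in 2002 dollars):

```python
# ROI check using the figures stated in the abstract (millions of 2002 dollars).
costs = 11.8          # cost to develop, implement, and operate CPOE, 1993-2002
gross_savings = 28.5  # cumulative savings over ten years

net_savings = round(gross_savings - costs, 1)  # cumulative net savings
```

The separately reported $9.5 million net operating budget savings is the subset of net savings that accrued to the operating budget under the hospital's 80% prospective reimbursement rate; it is not derivable from the two figures above alone.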

Frisse ME
Comments on return on investment (ROI) as it applies to clinical systems.
J Am Med Inform Assoc. 2006 May-Jun;13(3):365-7. [Abstract/Link to Full Text]

Schleyer TK, Thyvalikakath TP, Spallek H, Torres-Urquidy MH, Hernandez P, Yuhaniak J
Clinical computing in general dentistry.
J Am Med Inform Assoc. 2006 May-Jun;13(3):344-52.
OBJECTIVE: Measure the adoption and utilization of, opinions about, and attitudes toward clinical computing among general dentists in the United States. DESIGN: Telephone survey of a random sample of 256 general dentists in active practice in the United States. MEASUREMENTS: A 39-item telephone interview measuring practice characteristics and information technology infrastructure; clinical information storage; data entry and access; attitudes toward and opinions about clinical computing (features of practice management systems, barriers, advantages, disadvantages, and potential improvements); clinical Internet use; and attitudes toward the National Health Information Infrastructure. RESULTS: The authors successfully screened 1,039 of 1,159 randomly sampled U.S. general dentists in active practice (89.6% response rate). Two hundred fifty-six (24.6%) respondents had computers at chairside and thus were eligible for this study. The authors successfully interviewed 102 respondents (39.8%). Clinical information associated with administration and billing, such as appointments and treatment plans, was stored predominantly on the computer; other information, such as the medical history and progress notes, primarily resided on paper. Nineteen respondents, or 1.8% of all general dentists, were completely paperless. Auxiliary personnel, such as dental assistants and hygienists, entered most data. Respondents adopted clinical computing to improve office efficiency and operations, support diagnosis and treatment, and enhance patient communication and perception. Barriers included insufficient operational reliability, program limitations, a steep learning curve, cost, and infection control issues. CONCLUSION: Clinical computing is being increasingly adopted in general dentistry. However, future research must address usefulness and ease of use, workflow support, infection control, integration, and implementation issues. [Abstract/Link to Full Text]

Gurses AP, Xiao Y
A systematic review of the literature on multidisciplinary rounds to design information technology.
J Am Med Inform Assoc. 2006 May-Jun;13(3):267-76.
Multidisciplinary rounds (MDR) have become important mechanisms for communication and coordination of care. To guide design of tools supporting MDR, we reviewed the literature on MDR published from 1990 to 2005, focusing on information tools used, information needs, the impact of information tools, and evaluation measures. Fifty-one papers met inclusion criteria and were included. In addition to patient-centric information tools (e.g., medical chart) and decision-support tools (e.g., clinical pathway), process-oriented tools (e.g., rounding list) were reported to help with information organization and communication. Information tools were shown to improve situation awareness of multidisciplinary care providers, efficiency of MDR, and length of stay. Communication through MDR may be improved by process-oriented information tools that help information organization, communication, and work management, which could be achieved through automatic extraction from clinical information systems, displays and printouts in condensed forms, at-a-glance representations of the care unit, and storing work-process information temporarily. [Abstract/Link to Full Text]

Tierney WM, Beck EJ, Gardner RM, Musick B, Shields M, Shiyonga NM, Spohr MH
Viewpoint: a pragmatic approach to constructing a minimum data set for care of patients with HIV in developing countries.
J Am Med Inform Assoc. 2006 May-Jun;13(3):253-60.
Providing quality health care requires access to continuous patient data that developing countries often lack. A panel of medical informatics specialists, clinical human immunodeficiency virus (HIV) specialists, and program managers suggests a minimum data set for supporting the management and monitoring of patients with HIV and their care programs in developing countries. The proposed minimum data set consists of data for registration and scheduling, monitoring and improving practice management, and describing clinical encounters and clinical care. Data should be numeric or coded using standard definitions and minimal free text. To enhance accuracy, efficiency, and availability, data should be recorded electronically by those generating them. Data elements must be sufficiently detailed to support clinical algorithms/guidelines and aggregation into broader categories for consumption by higher level users (e.g., national and international health care agencies). The proposed minimum data set will evolve over time as funding increases, care protocols change, and additional tests and treatments become available for HIV-infected patients in developing countries. [Abstract/Link to Full Text]

Halamka J, Aranow M, Ascenzo C, Bates DW, Berry K, Debor G, Fefferman J, Glaser J, Heinold J, Stanley J, Stone DL, Sullivan TE, Tripathi M, Wilkinson B
E-Prescribing collaboration in Massachusetts: early experiences from regional prescribing projects.
J Am Med Inform Assoc. 2006 May-Jun;13(3):239-44.
Massachusetts payers and providers have encouraged clinician usage of e-Prescribing technology to improve patient safety, enhance office practice efficiencies, and reduce medical costs. This report describes three early pilot e-Prescribing projects as case studies. These projects identified the e-Prescribing needs of clinicians, illustrated key issues that made implementation difficult, and clarified the impact of various types of functionality. The authors identified ten key barriers: (1) previous negative technology experiences, (2) initial and long-term cost, (3) lost productivity, (4) competing priorities, (5) change management issues, (6) interoperability limitations, (7) information technology (IT) requirements, (8) standards limitations, (9) waiting for an "all-in-one solution," and (10) confusion about competing product offerings including hospital/Integrated Delivery System (IDN)-sponsored projects. In Massachusetts, regional projects have helped to address these barriers, and e-Prescribing activities are accelerating rapidly within the state. [Abstract/Link to Full Text]

Ozdas A, Speroff T, Waitman LR, Ozbolt J, Butler J, Miller RA
Integrating "best of care" protocols into clinicians' workflow via care provider order entry: impact on quality-of-care indicators for acute myocardial infarction.
J Am Med Inform Assoc. 2006 Mar-Apr;13(2):188-96.
OBJECTIVE: In the context of an inpatient care provider order entry (CPOE) system, to evaluate the impact of a decision support tool on integration of cardiology "best of care" order sets into clinicians' admission workflow, and on quality measures for the management of acute myocardial infarction (AMI) patients. DESIGN: A before-and-after study of physician orders evaluated (1) per-patient use rates of standardized acute coronary syndrome (ACS) order set and (2) patient-level compliance with two individual recommendations: early aspirin ordering and beta-blocker ordering. MEASUREMENTS: The effectiveness of the intervention was evaluated for (1) all patients with ACS (suspected for AMI at the time of admission) (N = 540) and (2) the subset of the ACS patients with confirmed discharge diagnosis of AMI (n = 180) who comprise the recommended target population who should receive aspirin and/or beta-blockers. Compliance rates for use of the ACS order set, aspirin ordering, and beta-blocker ordering were calculated as the percentages of patients who had each action performed within 24 hours of admission. RESULTS: For all ACS admissions, the decision support tool significantly increased use of the ACS order set (p = 0.009). Use of the ACS order set led, within the first 24 hours of hospitalization, to a significant increase in the number of patients who received aspirin (p = 0.001) and a nonsignificant increase in the number of patients who received beta-blockers (p = 0.07). Results for confirmed AMI cases demonstrated similar increases, but did not reach statistical significance. CONCLUSION: The decision support tool increased optional use of the ACS order set, but room for additional improvement exists. [Abstract/Link to Full Text]

Recent Articles in Journal of the Medical Library Association

Tu F
Knowledge and skills required to provide health information-related virtual reference services: evidence from a survey.
J Med Libr Assoc. 2007 Oct;95(4):458-61. [Abstract/Link to Full Text]

Shachak A, Shuval K, Fine S
Barriers and enablers to the acceptance of bioinformatics tools: a qualitative study.
J Med Libr Assoc. 2007 Oct;95(4):454-8. [Abstract/Link to Full Text]

Barnett MC, Keener MW
Expanding medical library support in response to the National Institutes of Health Public Access Policy.
J Med Libr Assoc. 2007 Oct;95(4):450-3. [Abstract/Link to Full Text]

Kim S, Chung DS
Characteristics of cancer blog users.
J Med Libr Assoc. 2007 Oct;95(4):445-50. [Abstract/Link to Full Text]

Shultz M
Comparing test searches in PubMed and Google Scholar.
J Med Libr Assoc. 2007 Oct;95(4):442-5. [Abstract/Link to Full Text]

Walker TA, Howard DL, Washington CR, Godley PA
Development of a health sciences library at a historically black college and university (HBCU): laying the foundation for increased minority health and health disparities research.
J Med Libr Assoc. 2007 Oct;95(4):439-41. [Abstract/Link to Full Text]

Leisey MR, Shipman JP
Information prescriptions: a barrier to fulfillment.
J Med Libr Assoc. 2007 Oct;95(4):435-8.
OBJECTIVES: The aim of this project was to identify and compare physician-perceived versus patient-experienced barriers to filling information prescriptions. METHODS: Physicians participated in a focus group designed to identify any issues linked to the implementation of the project. Telephone interviews were conducted with patients to gather details of the challenges encountered as well as to collect general health information-seeking practices. RESULTS: Although physicians identified several obstacles patients may encounter, it was not possible to identify patient barriers as no patient indicated having received an information prescription. In the focus group, physicians reported not using the term "information prescription," thus undermining one of the intrinsic tenets of the project. CONCLUSIONS: Although designed with the physicians' input, the study results demonstrated a disconnect in the information prescription process. The addition of intervention fidelity measures may have ensured a more positive outcome. [Abstract/Link to Full Text]

Grefsheim SF, Rankin JA
Information needs and information seeking in a biomedical research setting: a study of scientists and science administrators.
J Med Libr Assoc. 2007 Oct;95(4):426-34.
OBJECTIVE: An information needs study of clinical specialists and biomedical researchers was conducted at the US National Institutes of Health (NIH) to inform library services and contribute to a broader understanding of information use in academic and research settings. METHODS: A random stratified sample by job category of 500 NIH scientists was surveyed by telephone by an independent consultant using a standardized information industry instrument, augmented with locally developed questions. Results were analyzed for statistical significance using t-tests and chi-square tests. Findings were compared with published studies and an aggregated dataset of information users in business, government, and health care from Outsell. RESULTS: The study results highlighted similarities and differences with other studies and the industry standard, providing insights into user preferences, including new technologies. NIH scientists overwhelmingly used the NIH Library (424/500), began their searches at the library's Website rather than Google (P ≤ 0.001), were likely to seek information themselves (474/500), and valued desktop resources and services. CONCLUSION: While NIH staff work in a unique setting, they share some information characteristics with other researchers. The findings underscored the need to continue assessing specialized needs and seek innovative solutions. The study led to improvements or expansion of services such as developing a Website search engine, organizing gene sequence data, and assisting with manuscript preparation. [Abstract/Link to Full Text]

Dee CR
The development of the Medical Literature Analysis and Retrieval System (MEDLARS).
J Med Libr Assoc. 2007 Oct;95(4):416-25.
OBJECTIVE: The research provides a chronology of the US National Library of Medicine's (NLM's) contribution to access to the world's biomedical literature through its computerization of biomedical indexes, particularly the Medical Literature Analysis and Retrieval System (MEDLARS). METHOD: Using material gathered from NLM's archives and from personal interviews with people associated with developing MEDLARS and its associated systems, the author discusses key events in the history of MEDLARS. DISCUSSION: From the development of the early mechanized bibliographic retrieval systems of the 1940s to the beginnings of online, interactive computerized bibliographic search systems of the early 1970s chronicled here, NLM's contributions to automation and bibliographic retrieval have been extensive. CONCLUSION: As NLM's technological experience and expertise grew, innovative bibliographic storage and retrieval systems emerged. NLM's accomplishments regarding MEDLARS were cutting edge, placing the library at the forefront of incorporating mechanization and technologies into medical information systems. [Abstract/Link to Full Text]

Rethlefsen ML, Wallis LC
Public health citation patterns: an analysis of the American Journal of Public Health, 2003-2005.
J Med Libr Assoc. 2007 Oct;95(4):408-15.
OBJECTIVES: The research sought to determine the publication types cited most often in public health as well as the most heavily cited journal titles. METHODS: From a pool of 33,449 citations in 934 articles published in the 2003-2005 issues of American Journal of Public Health, 2 random samples were drawn: one (n = 1,034) from the total set of citations and one (n = 1,016) from the citations to journal articles. For each sampled citation, investigators noted publication type, publication date, uniform resource locator (URL) citation (yes/no), and, for the journal article sample, journal titles. The cited journal titles were analyzed using Bradford zones. RESULTS: The majority of cited items from the overall sample of 1,034 items were journal articles (64.4%, n = 666), followed by government documents (n = 130), books (n = 122), and miscellaneous sources (n = 116). Publication date ranged from 1826-2005 (mean = 1995, mode = 2002). Most cited items were between 0 and 5 years old (50.3%, n = 512). In the sample of 1,016 journal article citations, a total of 387 journal titles were cited. DISCUSSION: Analysis of cited material types revealed results similar to citation analyses in specific public health disciplines, including use of materials from a wide range of disciplines, reliance on miscellaneous and government documents, and need for older publications. [Abstract/Link to Full Text]
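The Bradford-zone analysis mentioned in the abstract partitions cited journal titles, ranked by citation count, into zones that each account for roughly an equal share of total citations. A minimal sketch of that partitioning, using hypothetical citation counts rather than the study's data:

```python
# Partition journals into Bradford zones by cumulative citation share.
# Citation counts below are hypothetical, for illustration only.

def bradford_zones(counts, n_zones=3):
    """counts: {journal: citation_count}. Returns n_zones lists of journal names,
    ranked by citations, each holding roughly 1/n_zones of total citations."""
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    total = sum(counts.values())
    per_zone = total / n_zones
    zones = [[] for _ in range(n_zones)]
    cumulative = 0
    for journal, c in ranked:
        # Assign by the citation mass accumulated before this journal.
        zone_idx = min(int(cumulative / per_zone), n_zones - 1)
        zones[zone_idx].append(journal)
        cumulative += c
    return zones

zones = bradford_zones({"A": 50, "B": 25, "C": 10, "D": 8, "E": 7})
```

In the classic Bradford pattern, the first zone holds a few highly cited "core" titles while the last zone holds a long tail of rarely cited ones.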

Kronenfeld M, Stephenson PL, Nail-Chiwetalu B, Tweed EM, Sauers EL, McLeod TC, Guo R, Trahan H, Alpi KM, Hill B, Sherwill-Navarro P, Allen MP, Stephenson PL, Hartman LM, Burnham J, Fell D, Kronenfeld M, Pavlick R, MacNaughton EW, Nail-Chiwetalu B, Ratner NB
Review for librarians of evidence-based practice in nursing and the allied health professions in the United States.
J Med Libr Assoc. 2007 Oct;95(4):394-407.
OBJECTIVE: This paper provides an overview of the state of evidence-based practice (EBP) in nursing and selected allied health professions and a synopsis of current trends in incorporating EBP into clinical education and practice in these fields. This overview is intended to better equip librarians with a general understanding of the fields and relevant information resources. INCLUDED PROFESSIONS: Professions are athletic training, audiology, health education and promotion, nursing, occupational therapy, physical therapy, physician assisting, respiratory care, and speech-language pathology. APPROACH: Each section provides a description of a profession, highlighting changes that increase the importance of clinicians' access to and use of the profession's knowledgebase, and a review of each profession's efforts to support EBP. The paper concludes with a discussion of the librarian's role in providing EBP support to the profession. CONCLUSIONS: EBP is in varying stages of growth among these fields. The evolution of EBP is evidenced by developments in preservice training, growth of the literature and resources, and increased research funding. Obstacles to EBP include competing job tasks, the need for additional training, and prevalent attitudes and behaviors toward research among practitioners. Librarians' skills in searching, organizing, and evaluating information can contribute to furthering the development of EBP in a given profession. [Abstract/Link to Full Text]

Rankins J, Kirksey O, Bogan Y, Brown B
Impact of a low-intensity pedagogical model for integrating MedlinePlus exercises into middle school nutrition lessons.
J Med Libr Assoc. 2007 Oct;95(4):388-93.
OBJECTIVE: The research developed and pilot-tested MedlinePlus exercises in a diet-related chronic disease prevention (DCDP) middle school lesson unit called "Live." METHODS: MedlinePlus exercises were jointly developed by two middle school family and consumer sciences (FCS) teachers and integrated into the "Live" DCDP lesson unit. FCS classes (n = 4) who had participated in a prior "Live" study were chosen to pilot-test the MedlinePlus-supplemented exercises. Evaluation measures included student satisfaction (assessed using an 8-item pre- and posttest questionnaire), knowledge gained, and attitudinal changes (assessed with an abridged version of a previously developed "Live" questionnaire). Statistical analyses were performed using SPSS. RESULTS: Of 62 total study participants, 56 (92.3%) said that they were either "somewhat" or "clearly": (a) more likely to use MedlinePlus as a future source for answering questions about their personal health and (b) more knowledgeable about how eating habits can help prevent disease. Selected parameters were improved for nutrition knowledge (P < 0.01) and attitudes (P < 0.01) related to healthy eating. CONCLUSIONS: MedlinePlus has good potential for efficiently communicating trustworthy diet-related disease-prevention behaviors to adolescents in an existing classroom curriculum. [Abstract/Link to Full Text]

Banks DE, Shi R, Timm DF, Christopher KA, Duggar DC, Comegys M, McLarty J
Decreased hospital length of stay associated with presentation of cases at morning report with librarian support.
J Med Libr Assoc. 2007 Oct;95(4):381-7.
OBJECTIVE: The research sought to determine whether case discussion at residents' morning report (MR), accompanied by a computerized literature search and librarian support, affects hospital charges, length of stay (LOS), and thirty-day readmission rate. METHODS: This case-control study, conducted from August 2004 to March 2005, compared outcomes for 105 cases presented at MR within 24 hours of admission to 19,210 potential matches, including cases presented at MR and cases not presented at MR. With matching criteria of patient age (+/- 5 years), identical primary diagnosis, and secondary diagnoses (within 3 additional diagnoses) using International Classification of Diseases (ICD-9) codes, 55 cases were matched to 136 controls. Statistical analyses included Student's t tests, chi-squared tests, and nonparametric methods. RESULTS: LOS differed significantly between matched MR cases and controls (3 days vs. 5 days, P < 0.024). Median total hospital charges were $7,045 for the MR group and $10,663 for the control group. There was no difference in 30-day readmission rate between the 2 groups. DISCUSSION/CONCLUSION: Presentation of a case at MR, followed by the timely dissemination of the results of an online literature review, resulted in a shortened LOS and lower hospital charges compared with controls. MR, in association with a computerized literature search guided by the librarians, was an effective means for introducing evidence-based medicine into patient care practices. [Abstract/Link to Full Text]

Lee P, DiPersio D, Jerome RN, Wheeler AP
Approaching and analyzing a large literature on vancomycin monitoring and pharmacokinetics.
J Med Libr Assoc. 2007 Oct;95(4):374-80. [Abstract/Link to Full Text]

Hill T
Fear, concern, fate, and hope: survival of hospital libraries.
J Med Libr Assoc. 2007 Oct;95(4):371-3. [Abstract/Link to Full Text]

Rockoff ML, Cunningham DJ, Ascher MT, Merrill J
Information outreach to a local public health department: a case study in collaboration.
J Med Libr Assoc. 2007 Jul;95(3):355-7. [Abstract/Link to Full Text]

Charbonneau DH, Marks EB, Healy AM, Croatt-Moore CF
Collaboration addresses information and education needs of an urban public health workforce.
J Med Libr Assoc. 2007 Jul;95(3):352-4. [Abstract/Link to Full Text]

Ryan JL
AZHealthInfo: a collaborative model for supporting the health information needs of public health workers, public librarians, consumers, and communities in Arizona.
J Med Libr Assoc. 2007 Jul;95(3):349-51. [Abstract/Link to Full Text]

Coady TR, Willard GK
Unlocking the power of electronic health information for public health workers in Kansas.
J Med Libr Assoc. 2007 Jul;95(3):347-8. [Abstract/Link to Full Text]

Eldredge JD, Carr RD
Public health informatics training in New Mexico.
J Med Libr Assoc. 2007 Jul;95(3):343-6. [Abstract/Link to Full Text]

Bobik CD
Healthy Schools Florida.
J Med Libr Assoc. 2007 Jul;95(3):340-2. [Abstract/Link to Full Text]

Harger NE, Martin ER
The CATCH project: central mass access to child health information.
J Med Libr Assoc. 2007 Jul;95(3):337-9. [Abstract/Link to Full Text]

Dutcher GA, Spann M, Gaines C
Addressing health disparities and environmental justice: the National Library of Medicine's Environmental Health Information Outreach Program.
J Med Libr Assoc. 2007 Jul;95(3):330-6.
PURPOSE: Disparities in health between minority and majority populations have become a topic of high interest in the health care and information communities. This paper describes the National Library of Medicine's (NLM's) oldest outreach program to a minority population, a project that has been going on for over fifteen years. SETTING/PARTICIPANTS/RESOURCES: The overview is based on internal documentation and reports, interviews, personal communications, and project reports. BRIEF DESCRIPTION: This is a historical overview of the Environmental Health Information Outreach Program, from its beginnings in 1991 as the Toxicology Information Outreach Project. The initial collaboration began with nine historically black colleges and universities (HBCUs) that had graduate programs in biomedicine. The current program includes representation from HBCUs, institutions serving Hispanic students, and tribal colleges. In addition to working with these institutions to promote the use of and access to electronic health information and related technology, this program brings attention to scientific research related to health issues that disproportionately affect minorities. RESULTS/OUTCOME: The program expanded due to its perceived success by the initial participants and NLM's management. Not only have faculty, staff, and students at the participating institutions received training in using NLM's toxicology, environmental health, and other electronic resources, but the participants ascribe other successes to their collaboration with NLM. [Abstract/Link to Full Text]

Dancy NC, Dutcher GA
HIV/AIDS information outreach: a community-based approach.
J Med Libr Assoc. 2007 Jul;95(3):323-9.
OBJECTIVE: The paper provides an overview of the National Library of Medicine's (NLM's) AIDS Community Information Outreach Program during the years 1994 to 2005, discusses the impact of previously funded projects, and explores future implications for HIV/AIDS information outreach to communities in need. METHODS: A qualitative assessment was conducted to provide information on the impact of projects funded by the AIDS Community Information Outreach Program during fiscal year 2002. Interviews were conducted and final reports were analyzed, resulting in themes based on roles and responsibilities of participants and the impact of the projects in the communities. RESULTS: Results from the assessment suggest that access to HIV/AIDS information led to improved communication between patients and their health care providers and encouraged better health care decision making. Feedback from reports and interviews included examples of impact such as an increase in services provided to communities, national and global recognition of HIV/AIDS services, sustainability of projects, and improved communication. CONCLUSION: Community-based health information outreach projects may empower the HIV/AIDS community to become more involved in health care and improve communication with providers. NLM will continue to promote the AIDS Community Information Outreach Program to encourage community organizations to design local projects for their specific communities. [Abstract/Link to Full Text]

Arnesen SJ, Cid VH, Scott JC, Perez R, Zervaas D
The Central American Network for Disaster and Health Information.
J Med Libr Assoc. 2007 Jul;95(3):316-22.
PURPOSE: This paper describes an international outreach program to support rebuilding Central America's health information infrastructure after several natural disasters in the region, including Hurricane Mitch in 1998 and two major earthquakes in 2001. SETTING, PARTICIPANTS, AND DESCRIPTION: The National Library of Medicine joined forces with the Pan American Health Organization/World Health Organization, the United Nations International Strategy for Disaster Reduction, and the Regional Center of Disaster Information for Latin America and the Caribbean (CRID) to strengthen libraries and information centers in Central America and improve the availability of and access to health and disaster information in the region by developing the Central American Network for Disaster and Health Information (CANDHI). Through CRID, the program created ten disaster health information centers in medical libraries and disaster-related organizations in six countries. RESULTS/OUTCOME: This project served as a catalyst for the modernization of several medical libraries in Central America. The resulting CANDHI provides much needed electronic access to public health "gray literature" on disasters, as well as access to numerous health information resources. CANDHI members assist their institutions and countries in a variety of disaster preparedness activities through collecting and disseminating information. [Abstract/Link to Full Text]

Cogdill KW, Ruffin AB, Stavri PZ
The National Network of Libraries of Medicine's outreach to the public health workforce: 2001-2006.
J Med Libr Assoc. 2007 Jul;95(3):310-5.
OBJECTIVE: The paper provides an overview of the National Network of Libraries of Medicine's (NN/LM's) outreach to the public health workforce from 2001 to 2006. DESCRIPTION: NN/LM conducts outreach through the activities of the Regional Medical Library (RML) staff and RML-sponsored projects led by NN/LM members. Between 2001 and 2006, RML staff provided training on information resources and information management for public health personnel at national, state, and local levels. The RMLs also contributed significantly to the Partners in Information Access for the Public Health Workforce collaboration. METHODS: Data were extracted from telephone interviews with directors of thirty-seven NN/LM-sponsored outreach projects directed at the public health sector. A review of project reports informed the interviews, which were transcribed and subsequently coded for emergent themes using qualitative analysis software. RESULTS: Analysis of interview data led to the identification of four major themes: training, collaboration, evaluation of outcomes, and challenges. Sixteen subthemes represented specific lessons learned from NN/LM members' outreach to the public health sector. CONCLUSIONS: NN/LM conducted extensive information-oriented outreach to the public health workforce during the 2001-2006 contract period. Lessons learned from this experience, most notably the value of collaboration and the need for flexibility, continue to influence outreach efforts in the current contract period. [Abstract/Link to Full Text]

Cahn MA, Auston I, Selden CR, Cogdill K, Baker S, Cavanaugh D, Elliott S, Foster AJ, Leep CJ, Perez DJ, Pomietto BR
The Partners in Information Access for the Public Health Workforce: a collaboration to improve and protect the public's health, 1995-2006.
J Med Libr Assoc. 2007 Jul;95(3):301-9.
OBJECTIVE: The paper provides a complete accounting of the Partners in Information Access for the Public Health Workforce (Partners) initiative since its inception in 1997, including antecedent activities since 1995. METHODS: A descriptive overview is provided that is based on a review of meeting summaries, published reports, Websites, project reports, databases, usage statistics, and personal experiences from offices in the National Library of Medicine (NLM), six organizations that collaborate formally with NLM on the Partners initiative, and one outside funding partner. RESULTS: With ten years of experience, the initiative is an effective and unique public-private collaboration that builds on the strengths and needs of the organizations that are involved and the constituencies that they serve. Partners-supported and sponsored projects include satellite broadcasts or Webcasts, training initiatives, Web resource development, a collection of historical literature, and strategies for workforce enumeration and expansion of public health systems research, which provide excellent examples of the benefits realized from collaboration between the public health community and health sciences libraries. CONCLUSIONS: With continued funding, existing and new Partners-sponsored projects will be able to fulfill many public health information needs. This collaboration provides excellent opportunities to strengthen the partnership between library science and public health in the use of health information and tools for purposes of improving and protecting the public's health. [Abstract/Link to Full Text]

Humphreys BL
Building better connections: the National Library of Medicine and public health.
J Med Libr Assoc. 2007 Jul;95(3):293-300.
PURPOSE: The paper describes the expansion of the public health programs and services of the National Library of Medicine (NLM) in the 1990s and provides the context in which NLM's public health outreach programs arose and exist today. BRIEF DESCRIPTION: Although NLM has always had collections and services relevant to public health, the US public health workforce made relatively little use of the library's information services and programs in the twentieth century. In the 1990s, intensified emphases on outreach to health professionals, building national information infrastructure, and promoting health data standards provided NLM with new opportunities to reach the public health community. A seminal conference cosponsored by NLM in 1995 produced an agenda for improving public health access to and use of advanced information technology and electronic information services. NLM actively pursued this agenda by developing new services and outreach programs and promoting public health informatics initiatives. METHOD: Historical analysis is presented. RESULTS/OUTCOME: NLM took advantage of a propitious environment to increase visibility and understanding of public health information challenges and opportunities. The library helped create partnerships that produced new information services, outreach initiatives, informatics innovations, and health data policies that benefit the public health workforce and the diverse populations it serves. [Abstract/Link to Full Text]

Bobal AM, Brown HL, Hartman TL, Magee M, Schmidt CM
Navigating the US health care system: a video guide for immigrant and diverse populations.
J Med Libr Assoc. 2007 Jul;95(3):286-9. [Abstract/Link to Full Text]

Ryce A, Dodson S
A partnership in teaching evidence-based medicine to interns at the University of Washington Medical Center.
J Med Libr Assoc. 2007 Jul;95(3):283-6. [Abstract/Link to Full Text]