Drug Truths

A site devoted to teaching about drug discovery and development.

Archive for July 2011

The FDA Must Review Safety AND Efficacy of New Drugs

with 4 comments

A recent opinion piece in The Wall Street Journal (7/25/2011) by Michele Boldrin and S. Joshua Swamidass called “A New Bargain for Drug Approvals” lobbies for a fundamental change in FDA oversight for new drug approvals.  Basically, they are advocating for a system that puts safety first and allows for the proof of effectiveness later.  Specifically, the authors are proposing that the FDA “should return to its earlier mission of ensuring safety and leaving proof of efficacy for post-approval studies and surveillance.  It is ensuring the efficacy – not the safety – of drugs that is most expensive, time-consuming and difficult.”

What is their reason for proposing such a change?  Among other things, they are concerned that the necessity of proving a drug’s efficacy, which is the primary driver for the approximately $1 billion price tag in developing a drug, is limiting the number of compounds that the biopharmaceutical industry can advance through FDA approval.  The authors believe that if a company only had to invest in showing that a compound was safe, the entire process would be cheaper and then companies would be able to have the funds to “unleash the next wave of medical innovations.”

They go on to propose that: “In exchange for this simplification, companies would sell medications at a regulated price equal to the total economic cost until proven effective, after which the FDA would allow the medications to be sold at market prices.”

There are a variety of obstacles that such a proposal would face.  First of all, there is a reason that the FDA’s remit was expanded decades ago.  In evaluating a medicine, you need to be able to put the safety profile of a new drug in perspective with its benefit profile.  A breakthrough cancer drug with some serious side effects might justifiably be approved because of its life-saving effects, whereas a compound that lowers LDL but has the same side-effect profile might be deemed unapprovable based on its benefit-risk profile.  You need full safety and efficacy data to judge a new drug fully.

The second major issue is economic.  Drug prices are already regulated throughout the rest of the world and will likely be regulated to a certain extent in the US at some point in the future.  In addition, payers (HMOs, etc.) want to know the safety and efficacy benefits of a new drug before they will allow it to be prescribed in their health networks.  Thus, having efficacy data is needed to help get a new drug prescribed as well as to get a reasonable price.  Without having full efficacy data, the drug will not be made broadly available to patients.

Finally, and maybe most importantly, why would a physician prescribe a new drug without a full understanding of its beneficial effects, and without knowing how it stacks up against the treatments that she/he is already using?  The fact that a drug is deemed safe isn’t a good enough reason for a physician to give it to patients.  Even now, with these data in hand, physicians are reluctant to try a new medicine over one with which they have already had good patient success.

Call me old-fashioned, but I like having the FDA be the independent evaluator of both safety and efficacy.  Such approval gives doctors and patients confidence in the health care system.  Abandoning this practice would be a big mistake.


Written by johnlamattina

July 28, 2011 at 3:08 am

The Real Hurdle in Discovering New Medicines

leave a comment »

There are many self-styled experts who think they know how to improve the productivity of biopharmaceutical R&D organizations.  They believe that smaller organizations, external networks, great technologies, disease biomarkers, or outsourcing large parts of operations can improve R&D productivity.  All of these things may indeed help gain efficiencies.  But there is no guarantee that they will lead to new treatments.  They are just ways to help a company execute better.  They do not solve the core problem: running the key clinical trial that proves or disproves your hypothesis for treating a disease with your new drug.  No matter how good you think your preclinical science is, the ultimate test of your hypothesis occurs in long-term clinical trials.  And these trials ALWAYS yield surprises, which all too often are negative.  Here are two such examples.

Nerve growth factor (NGF) is a protein that modulates pain through sensitization of neurons.  Multiple studies in animal models of pain show that NGF can both cause and augment pain.  Furthermore, blocking NGF alleviates pain.  About a decade ago, scientists at Rinat, a biotech company since acquired by Pfizer, developed an antibody to NGF called tanezumab.  Tanezumab worked extremely well in animal models of inflammation, and so an obvious path for clinical study was to treat painful osteoarthritis of the knee, a poorly served condition.

The initial results with tanezumab were extremely exciting.  Patients for whom pain medications no longer worked, and others who had been recommended for a knee replacement, suddenly felt great.  Given that the sole biological role of tanezumab was to bind NGF and prevent its harmful effects, it was thought that this very targeted antibody would have a great safety profile.  However, tanezumab’s downfall proved to be the very activity it restored.  Patients were feeling so good that their rejuvenated, active lifestyles resulted in worsening arthritis in the knee and, in some cases, total knee replacement.  It turns out that complete elimination of pain in these patients is not a good thing, as pain serves as a warning sign that damage is occurring.

There are other pain indications where tanezumab may prove useful.  One is in cancer pain and here the risk-benefit profile of this antibody may prove to be of clinical significance.  Pfizer is now studying tanezumab for this indication.  But these studies will take time and cost tens of millions of dollars more.  And there is no guarantee that this effort will be successful.

The explosive increase in the incidence of type 2 diabetes has been discussed in this blog and in numerous other articles.  The need for new medications is acute, especially in light of the safety difficulties encountered with two commonly used marketed agents, Avandia and Actos.  Thus, many people were closely following the progress of dapagliflozin, a totally new approach to treating this disease.  This compound, jointly developed by Bristol-Myers Squibb and AstraZeneca, is the first of a new class of compounds, SGLT2 inhibitors, which lower blood sugar by causing it to be excreted in the urine.  Furthermore, dapagliflozin also caused a small but significant drop in body weight, another important risk factor in this population.

Last week, an FDA Advisory Committee voted against the approval of dapagliflozin to treat diabetes.  In a two-year study of this drug in diabetics, roughly 0.4% of women got breast cancer as compared to 0.1% of women in the control group.  There was also an increase in bladder cancer in men (0.3% on drug vs. 0.05% in controls).  This risk-benefit profile was not deemed acceptable by the committee.  Is this problem related to the mechanism of SGLT2 inhibition, or is it due to an unknown property of dapagliflozin?  No one knows, and this likely won’t be known unless another biopharmaceutical company studies a different inhibitor and shows that it doesn’t share this problem.
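To get a sense of scale, the committee was looking at large relative increases on small absolute numbers.  A rough back-of-envelope calculation, using only the rates quoted above (not the full trial dataset), sketched in Python:

```python
# Rough relative-risk calculation from the event rates quoted above.
# These are the percentages cited in this post, not the full trial data.

def relative_risk(rate_drug, rate_control):
    """Ratio of event rates in the drug arm vs. the control arm."""
    return rate_drug / rate_control

breast_rr = relative_risk(0.4, 0.1)    # women: 0.4% on drug vs 0.1% control
bladder_rr = relative_risk(0.3, 0.05)  # men: 0.3% on drug vs 0.05% control

print(f"Breast cancer relative risk: {breast_rr:.0f}x")   # 4x
print(f"Bladder cancer relative risk: {bladder_rr:.0f}x")  # 6x
```

A four- to six-fold relative increase, even on a small absolute base, is exactly the kind of signal that can sink an approval.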

Both the NGF and SGLT2 programs involved cutting edge science.  Both programs were carried out by organizations with great experience in their respective fields.  Both sought to develop new medicines to meet a major medical need.  They didn’t fail because of lack of effort, resources or talent.  They failed because the ultimate proof of a medical hypothesis in drug R&D does not occur in the laboratory but rather in long term clinical trials, where the complexities of human biological pathways are still not well understood.  These trials are long and expensive – and they often fail.

Pharmaceutical R&D is a high risk, high reward enterprise.  There are no easy pathways to getting a major new medicine approved.  The FDA has already approved more new drugs in 2011 than it had in all of last year.  Furthermore, there are great compounds in late-stage development across the industry.  That’s the good news.  The sad news for patients, however, is that a good number of these will fail for reasons like those above.  Outsourcing R&D, mergers, reorganizations, etc. won’t change that.

Written by johnlamattina

July 26, 2011 at 1:04 am

The Death of the Blockbuster Has Been Greatly Exaggerated

with 7 comments

Over the past few years, a number of critics of biopharmaceutical companies have predicted the demise of the industry because of its dependence on blockbusters.  A blockbuster is defined as a branded prescription drug that generates annual revenues of $1 billion or more.  Discovering a blockbuster should be a good thing as it is a medicine that is prescribed to millions of people because of its beneficial effects on disease and suffering.  However, many major blockbusters, like Zoloft, Lipitor and Fosamax, have already lost or are about to lose their patent protection and it is thought that there is a dearth of new compounds in the drug makers’ pipelines with blockbuster potential to take the place of older products.

A few years ago, no less than the former head of the FDA, Dr. David Kessler, slammed the blockbuster mentality saying: “The model that we’ve based pharmaceutical development on the past ten years is simply not sustainable.  The notion that there are going to be drugs that millions of people can take safely, the whole notion of the blockbuster, is what has gotten us into trouble.”  Melody Petersen was even more strident in an opinion piece titled “A Bitter Pill” that appeared in The Los Angeles Times in 2008: “For 25 years, the drug industry has imitated the basic business model of Hollywood.  Pharmaceutical executives, like movie moguls, have focused on creating blockbusters.  They introduce products that they hope will appeal to the masses, and then they promote them like mad.”

Sorry.  It’s hard for me to envision my old boss, former Pfizer CEO, Hank McKinnell “taking a lunch” to discuss strategy with the heads of Paramount and Twentieth Century Fox.

First, it must be pointed out that a company doesn’t set its research priorities based on whether or not a program can eventually yield a blockbuster.  Such predictions are difficult, if not impossible.  For a new medicine to be successful, it must be safe, effective and meet a major medical need.  Assuming that 15 years after starting a new R&D program, the new compound finally gets approved, it then needs to get a favorable label from the FDA, reasonable pricing from those who reimburse drug costs, and acceptance by physicians and patients.  A great example of the difficulty of predicting blockbusters, interestingly enough, is Lipitor.  When Warner-Lambert was seeking a partner to help sell and market what proved to be the biggest selling drug of all time, the company approached Pfizer.  The Pfizer marketing team’s analysis said that the peak sales potential of this medicine would be $800 million – a significant amount, but not exactly blockbuster territory.  However, the actual peak in worldwide sales for Lipitor was in excess of $13 billion.  What the marketing team did not anticipate were the results of the long-term studies with Lipitor, completed some years later, which showed the value of this compound in preventing heart attacks and strokes.

Nevertheless, one might think this point is moot.  Based on the pronouncements of the doomsayers, one would think that the industry has lost the capability to produce major new products.  However, two reports that appeared last week indicate that this is not the case and, further, suggest that the strategy of working on projects meant to discover compounds that meet major medical needs can still lead to blockbusters.  The first report came from Jonathan Rockoff and Ron Winslow in The Wall Street Journal, an article highlighted in this blog last week.  Although Rockoff and Winslow focused on the increase in FDA approvals likely to occur over the coming years, they included a table highlighting a dozen exciting new medicines that have either been recently approved or are in late-stage development.  They also included predicted peak sales of these compounds, which ranged from $1.1 – $4.3 billion annually – blockbusters all.

A second paper, “The New Face of Blockbuster Drugs” by Elizabeth Schwarzbach, Ilan Oren and Pierre Jacquet in “IN VIVO: The Business and Medicine Report” more than affirms the views in The Wall Street Journal article.  Their detailed analysis shows that there will be more blockbusters in 2015 than there had been 10 years earlier (132 vs. 101).  Furthermore, these will bring in on average 15% more revenue apiece.

This is very encouraging news.  Yet, when many of these programs started, it wasn’t clear if the compounds that were discovered would even work in the clinic, much less emerge as blockbusters.  So what has happened to create these potential blockbusters?  In the late 90s, industry leaders realized that for new products to be successful in the future, they had to address major medical needs or at least be highly improved over existing therapy.  R&D priorities were altered to reflect the needs of the future, and the products now emerging from R&D pipelines meet these needs.  Dr. Janet Woodcock, the FDA’s drug division director, recently celebrated this revamped strategy: “We’re seeing a lot of innovation, much more than in recent memory.”

There are still many diseases where new treatments are needed: obesity, Alzheimer’s disease, antibiotic resistant infections, etc.  The number of people affected by these diseases is so great that any new medicines that are shown to be safe and effective in the treatment of these diseases won’t just be a great benefit to patients around the world, they will also be blockbusters.

Written by johnlamattina

July 19, 2011 at 1:54 pm

Posted in Uncategorized


How Much of a Pharma Company’s Pipeline Should be In-Licensed?

with 2 comments

I was asked this question last week and I casually answered that a pharmaceutical company should generate about 33% of its pipeline through outside sources (in-licensing).  Actually, I didn’t pull this number out of the blue.  Rather, it is derived from observations and beliefs accumulated over the past 30 years.

Why in-license anything at all?

1)      No matter how big your R&D organization is, no matter how capable it is, and no matter how smart your scientists are, it is impossible for one organization to corner the market on all the good ideas being worked on across the world.  You WILL miss opportunities.  It is therefore important to have the capacity to add such programs when they become available.  Some years ago, when novel anti-ulcer drugs like Zantac were in late development, Merck realized that these compounds would be important new medicines.  As it didn’t have its own internal program, it licensed two important anti-ulcer medications: Pepcid (famotidine) and Prilosec (omeprazole).  In fact, in the case of the latter, Merck did state-of-the-art toxicology studies to help get the compound approved, thereby showing the added value that a partner with a strong R&D capability can deliver.

2)      You may have recognized that a potential new breakthrough existed in a hot therapeutic area, but despite committing a good deal of time and effort, your own internal efforts failed.  If having a new medicine in this area makes strategic sense for your company, it would behoove you to try to license a promising agent from a company looking for a partner.  This is what happened when Warner-Lambert Parke-Davis signed a co-marketing agreement with Pfizer for Lipitor.  The result proved historic.

3)      Perhaps your own internal program has had some success and your compound is looking good in mid-stage clinical trials.  However, a competitor’s compound is 1 – 2 years ahead of your own, and your compound looks pretty similar to the competitor’s in its clinical profile.  It might make business sense for the two companies to link up the programs so that one compound is the lead and the other serves as a back-up.  Why would the lead company do such a deal?  For one thing, should a problem crop up with the lead compound, there is now an alternative.  The deal can also be constructed so that expensive clinical trial costs are shared.  Finally, the partnership gets two shots in this arena instead of one.  This was the situation when BMS sought a partner for its anti-clotting agent, apixaban.  While Pfizer had its own compound in development, apixaban was at least 12 months ahead and the Pfizer compound didn’t have any apparent advantages.  Joining forces maximized the use of resources for both companies.

This all sounds pretty good. Why not in-license 50 – 60% or more of your pipeline?

1)      This is sort of like being dependent on foreign oil.  Your pipeline is your lifeblood, your future.  To depend on outsiders to supply it for you is a mistake.  For one thing, you are never assured that what you need will be out there.  Furthermore, you can’t be guaranteed that you won’t be outbid by someone else for what you want.

2)      If you in-license a compound that makes it to market, in the best case you are paying a significant royalty.  In the worst case, you are keeping only 30 – 40% of sales.  Clearly, you don’t get as significant a return on investment from an in-licensed compound as from one generated internally.

3)      Most importantly, you need a significant in-house R&D group to be able to not just help evaluate potential in-licensed compounds, but also to be viewed by a prospective partner as an organization that will bring added value to its discovery.

One might argue with the 33% figure.  Maybe it should be 30%.  Maybe it should be 45%.  The bottom line is that in-licensing should be a significant part of one’s strategy.  But it shouldn’t be the majority.  The pipeline is a company’s lifeblood, and internal R&D must drive it.
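The return-on-investment gap in point 2 is easy to illustrate with a toy model.  All of the numbers below (sales level, cost of sales, royalty rate) are hypothetical, chosen only to show the mechanics, not drawn from any actual deal:

```python
# Toy comparison of the annual return from an internally discovered
# compound vs. an in-licensed one. All figures are hypothetical.

def net_revenue(sales, cost_of_sales_pct, royalty_pct=0.0):
    """Sales retained after cost of sales and any royalty paid to the licensor."""
    return sales * (1 - cost_of_sales_pct - royalty_pct)

sales = 1_000  # a $1 billion/year product, expressed in $ millions

internal = net_revenue(sales, cost_of_sales_pct=0.25)                     # no royalty
licensed = net_revenue(sales, cost_of_sales_pct=0.25, royalty_pct=0.30)   # 30% royalty

print(f"Internal compound retains:    ${internal:.0f}M per year")   # $750M
print(f"In-licensed compound retains: ${licensed:.0f}M per year")   # $450M
```

Even before milestone payments, the licensor’s share takes a large bite out of the economics, which is one reason in-licensed compounds, however valuable, shouldn’t be the majority of a pipeline.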

Written by johnlamattina

July 15, 2011 at 2:16 pm

Posted in Uncategorized


Pharma R&D Productivity: Have They Suddenly Gotten Smarter?

with 3 comments

For the past few years, critics of the pharmaceutical industry have been very negative about the productivity of these companies.  The declining number of FDA approvals has only supported this view.  From 1990 – 1999, the FDA approved an average of 31 drugs per year.  In the next ten years, this number dropped to 24.  The outlook didn’t get any better in 2010, which saw only 21 FDA approvals.  Thus, yesterday’s front page story in The Wall Street Journal is stunning: “Drug Makers Refill Parched Pipelines.”

Huh? Have authors Rockoff and Winslow been sampling hallucinogens?

Not quite.  The data they present in the article is certainly encouraging.  The story has its roots in last week’s testimony by FDA Drug Division Director Dr. Janet Woodcock, who told Congress that the FDA has already approved 20 innovative medicines this year that “work differently or better than existing drugs or tackle ailments lacking good treatments.”  She went on to say: “We’re seeing a lot of innovation, much more than in recent memory.”  Dr. Woodcock is talking about the same pharmaceutical industry R&D engine that impatient critics have deemed to be broken.  According to a graph in The Wall Street Journal article, Rockoff and Winslow predict that drug approvals in the 2010 – 2019 period will exceed anything that the industry has ever produced.  If this is true, what a blessing this news would be for patients around the world.

Unfortunately, The Wall Street Journal authors are off in their assessment of the causes behind this productivity jump.   To quote:  “Today’s new drug output appears to mark the beginnings of a payoff from a research reorientation the industry began undertaking several years ago.”  Actually, the productivity surge, if it follows the projected path, is the result of research done in biotech and pharmaceutical laboratories in the 1990s.

To support their argument, Rockoff and Winslow cite 12 compounds in a table labeled, “Novel drugs recently approved and in the pipeline.”  The very first compound in the table, Benlysta (for treating lupus), was discovered by Human Genome Sciences (HGS) and jointly developed with GlaxoSmithKline (GSK).  HGS started the discovery program that led to Benlysta in 1996, started clinical trials with this drug in 2001, and finally got approval this year.  The discovery program that uncovered tofacitinib, Pfizer’s breakthrough drug for rheumatoid arthritis, started in 1994, and the New Drug Application (NDA) for this drug will be filed this year.  Assuming this is approved in 2012, this R&D program will have taken 18 years!  While I cannot attest to each of the compounds enumerated by The Wall Street Journal, my guess is that, based on the length of time it takes to get a drug approved, most or all of them had their roots in programs that commenced in the late 1990s, not, as the authors suggest, in “research reorientation the industry began undertaking several years ago.”

If this is the case, what led to the surge that we are now seeing?  Two seismic changes occurred roughly 10 years ago that greatly impacted the industry’s productivity.  The first involved answering the question: “What value does this medicine bring over existing therapies?”  This answer was being demanded not just by payers but also by regulatory agencies.  Before 2000, these studies were generally done after a drug was approved – so-called Phase 4 studies.  However, in order to achieve a reasonable price for a new medicine, these studies needed to be included in the initial filing for approval.  The importance and costs of such studies cannot be overestimated.  In many cases, studies that measure the performance of a new drug in a real-life situation are necessary.  For example, it was no longer enough to show that a new medicine lowered bad cholesterol.  It had to be proven that it could also reduce heart attacks and strokes.  These studies alone add 3 – 5 years to the clinical development program and literally hundreds of millions of dollars in costs.  Since many clinical programs were already underway in the mid-2000s, they had to be adjusted, and so the programs took longer than originally planned.

In addition, by the mid-2000s, far more safety data was being required by the FDA than ever before.   For example, the older NSAID pain relievers like naproxen and ibuprofen had little long-term patient exposure at the time their respective NDAs were approved.  To put it into perspective, all that was required in the past was a study showing safety in patients exposed to a drug for 90 days.  Now, the FDA won’t approve any pain reliever without patients being treated for at least a year (and more likely three years) with a study that also measures impact on overall health outcomes.

Are these important changes?  Absolutely, as any drug that can get through these hurdles will, as Dr. Woodcock said, “work better than existing treatments.” However, companies had to adjust to these changes in the last decade, and this drove up costs, caused development programs to take longer and also resulted in more late-stage failures as compounds that were safe and effective might not have been as effective as existing, cheaper treatments and thus not commercially viable.

Given all of this information, I hope that the industry has adapted, development programs and timelines have been adjusted, and we can now expect a steady stream of new medicines.  Yet, three issues temper my hope.  First, the industry has seen such an increase in productivity before.  This happened in the mid-90s when Congress enacted the Prescription Drug User Fee Act (PDUFA).  The FDA was grossly understaffed in the 1990s and, as a result, many compounds languished awaiting approval while they were being approved in Europe, oftentimes years before they would be available in the US.  Congress was outraged by this.  When the FDA showed data that indicated how understaffed it was, Congress’s solution was the PDUFA, which essentially charged a company a fee when it filed an NDA, then used the revenues generated to hire more FDA reviewers.  This action resulted in the FDA being able to review dossiers more rapidly, which removed the logjam and led to more medicines being approved than at any time before or since.  Has the adaptation of the industry to the new NDA expectations led to a similar increase in compounds being approved?  We won’t know for five years.

Second, the consolidation of the industry in the last 15 years has wreaked havoc with R&D organizations.  It is my firm belief that mergers are particularly difficult for R&D, as starting and stopping research programs takes time.  I once heard a Nobel Prize winner in Medicine say that it takes at least three years for a professor who leaves one university for another to get his laboratory back running at full speed.  It is a lot quicker in industry, but it still takes time.

Finally, the cuts that have been made across the board in R&D in many companies will have an impact going forward.  This effect will not be seen in the short term.  As was detailed above, discovery-development programs for successful new medicines take over a decade.  However, the turmoil in R&D of the recent past (mergers, reorganizations, site-closures, new business models, etc.) will be felt in the next decade.

I am really hoping that Rockoff and Winslow are correct and we are about to see the approval of hundreds of new medicines in this decade.  But the question remains: is this a five-year blip or a sustainable trend?

Written by johnlamattina

July 12, 2011 at 1:54 pm

How Does R&D Deal With the Explosion in the Incidence of Diabetes?

with 3 comments

A recent story in Lancet, a British medical journal, showed that since 1980 the prevalence of type 2 diabetes has doubled globally, to 347 million people, and tripled in the United States, where the Centers for Disease Control and Prevention (CDC) estimates that 1 in 12 Americans have this disease.  Instead of triggering alarms among those focused on health care issues, this scientific paper seems to have received only modest attention.  Gautam Naik did cover it in the Wall Street Journal, and his quote from one of the study’s authors, Professor Majid Ezzati, was on the mark: “Diabetes is a long-lasting and disabling condition, and it’s going to be the largest cost for many health care systems.”

It is not as if this epidemic has a mysterious cause.  The growth in the incidence of type 2 diabetes is directly related to obesity.  The CDC has been following the girth of America for the past two decades and the results are startling (www.cdc.gov).  There are now nine states in this country where one-third of the population has a body mass index over 30, or, in other words, is categorized as obese (to put this in perspective, a 5’9’’ man who weighs about 203 pounds has a BMI of 30).  As people get heavier, the cells in their bodies are less able to utilize insulin.  This inability leads to increased levels of sugar in the bloodstream, which, if left untreated, can result in vascular complications leading to heart disease, kidney failure and blindness.
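The BMI arithmetic in that parenthetical can be checked with the standard formula: weight in kilograms divided by height in meters squared, or, in imperial units, 703 × pounds / inches².  A quick sketch:

```python
# Body mass index from weight in pounds and height in inches,
# using the standard imperial conversion factor of 703.

def bmi(weight_lb, height_in):
    return 703 * weight_lb / height_in ** 2

height = 5 * 12 + 9  # 5'9" = 69 inches

print(f"{bmi(203, height):.1f}")  # about 30 -- the threshold for obesity
print(f"{bmi(210, height):.1f}")  # about 31
```

At 5’9’’, the BMI-30 obesity threshold falls at roughly 203 pounds; a 210-pound man at that height is already just past it.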

The obvious solution to this problem is getting people to exercise more, eat less and embrace a healthier lifestyle.  While admirable efforts to achieve these goals are being made on multiple fronts, they aren’t working.  Thus, people are going to need to have the option of drug therapy to help alleviate the symptoms and ward off the deleterious downstream effects of diabetes.  But this solution isn’t so straightforward.  While there are drugs currently available to treat diabetes, their effects are modest.  Furthermore, two anti-diabetic drugs, Avandia and Actos, have been found to be deficient in terms of their risk-benefit profile; as a result, Avandia is no longer prescribed and Actos’ use is highly limited.

Equally as concerning is the fact that the R&D pipeline of potential new anti-diabetic drugs is not impressive.  While great drugs have been found to treat high blood pressure and to lower cholesterol, diabetes drug discovery has proven to be much more difficult.  New drugs for diabetes must be viewed as a major priority for all involved in the discovery and development of new medicines.  As such, R&D in this area needs to be given a much higher priority, particularly by governmental agencies.  Here are a few suggestions:

1)      More needs to be invested in basic research – The NIH sets the national health priorities by where it invests its budget.  Of its $32 billion projected research budget for 2012, the NIH allotted only $1.06 billion to diabetes research.  In contrast, the cancer budget (across all of the various cancers such as lung, breast, etc.) comes to $8.15 billion.  The investment in cancer research by the NIH over the past three decades has helped to produce a spectacular pipeline of almost 1,000 new anti-cancer agents currently in development, certainly indicative of the priority this area of research has received.  But even the infectious diseases budget is 4 times that of diabetes.  Given the enormous prevalence of diabetes, perhaps some redistribution of NIH funds is in order so that more research into understanding diabetes disease mechanisms can be generated.

2)      The FDA needs to be more creative in clinical trial paradigms – the FDA has been under siege recently because of its approval, then subsequent withdrawal, of diabetes drugs.  As a result of the Avandia and Actos incidents, any new drug to treat diabetes is now required to complete an outcome study (in which patients are studied while on drug for three years and events such as heart attacks, strokes, etc. are measured) before it is reviewed by the FDA.  Studies like these cost hundreds of millions of dollars and, although patients are on the drug for three years, they take five years to complete given their complexity.  In order to discover a truly novel breakthrough drug for diabetes, the FDA needs to be flexible in its approval requirements.  Given the epidemic nature of diabetes, the FDA needs to address the disease with the same urgency with which it attacks AIDS and cancer.  It should use its approach to AIDS and cancer trials as a blueprint for diabetes, perhaps even reducing the initial outcome study to one year instead of three.  If this study is successful, the drug can be approved by the FDA with the proviso that a three-year study will begin immediately upon approval.

3)      Pharmaceutical companies need to rededicate themselves to this area of research – there is no doubt that R&D in this area is risky.  Novel mechanisms are speculative to work on, and the clinical trials are difficult, particularly if you are trying to measure the impact of an experimental drug on diabetic complications like kidney disease or retinal degeneration.  Furthermore, there is risk involved when it is unclear how difficult the regulatory pathway is going to be.  But big pharma needs to rise to the challenge here, as they are the ones whose experimental drugs will prove or disprove much of the early hypothetical work coming from academia and the NIH.

The cost to society from the diabetes epidemic is going to be huge.  Changes in how we approach the discovery of new anti-diabetics have to occur now in order to have an impact in the next decade.

Written by johnlamattina

July 5, 2011 at 8:26 pm

Posted in Uncategorized