Heartbleed forces you to do two things:
1) Fix the problem.
2) Learn how quickly you can fix unknown problems.
It took us about three days from learning about Heartbleed to fixing the problem. That's two days too long.
Many organisations are still struggling with this. One organisation has simply logged a request in its web developer's queue, to be addressed whenever it reaches the top. That might take weeks.
There are some lessons to be learnt from this.
1) Don't ever claim you're completely secure or invulnerable to attack. A critical problem can emerge without warning. It might be a security problem or a reliability problem. You can minimize the risk, but you can't remove it. Your community can be hacked or go down at any time.
2) You need more than one person who can make technical changes. If you're reliant on a single person, as we were, you're stuck when that person is overwhelmed by dozens of requests at once. You need a second person who can respond to these issues.
3) Announce solutions before you announce problems. It feels counter-intuitive, but announcing your community is vulnerable before you have fixed the problem leaves members anxious with nothing they can do about it.
4) Save your members' attention for the big issues. Members give you a limited amount of attention. You could easily alert them to one of hundreds of potential hacks or privacy issues every day. That's not reasonable. Eventually members stop looking for the wolf. Save their limited attention for the really major issues. This sometimes means making risky calculations about the odds.
Heartbleed is a useful opportunity to improve your reaction times to major issues, make changes, and communicate those changes.
…and if you are going to communicate those changes, be sure you fully understand the problem.