Theory vs. Practice
Diagnosis is not the end, but the beginning of practice.
What does "provably secure" mean?
You might have read in academic papers that an encryption algorithm has been formally demonstrated to be "provably secure".
Unfortunately, in academic jargon this does not mean that the algorithm cannot be broken – it merely means that its level of security has been verified to match the one supported by its underlying method (which rests on a lot of unchecked assumptions).
As a result, the same claim can be applied to utter crap and to a genuine marvel, in the same sentence, and without contradiction.
We will focus here on the second category, as representatives of the first kind are already widely published everywhere.
What makes something "secure"?
The total lack of uncertainty (and therefore, the demonstrated lack of assumptions).
Very few things are considered "secure" because the sun can blow up without notice, the ground can melt in a volcanic eruption, and each of us can die from a heart attack – at any moment.
So what makes something "secure" must be independent from the known and unknown laws of the universe – and, more generally, from anything that we cannot control, like exterior conditions.
And this is not as difficult as it sounds.
For example, one cannot control the input of end-users (or robots) in a software program, but it is perfectly feasible to avoid the conditions that would make the program vulnerable to junk (either accidental or malicious).
This process of excluding conditions is often implemented as a long list of rules collected empirically from past experiments. When mature enough, it can be formally demonstrated as "secure" (that is, matching the premises of the defined environment).
But there is another way – far faster to implement and use, and one which leaves far less room for the human errors involved in collecting, processing and verifying a "long list of rules", or in defining a "relevant environment" – not to mention the desirable prospect of avoiding the runtime penalty of enforcing such a list.
This other way consists in using, from the very beginning, a method which leaves no room for uncertainty.
Then, no patches are needed. There's no reason to worry about this specific problem because it just cannot take place.
And since there are gazillions of other more interesting problems to resolve, excluding by-design those categories of problems that can be resolved by merely using sane practices makes a lot of sense.
From Theory to Practice
From encryption algorithms to protocol parsers, and even social systems, there is room for indisputable certainty – that is, initial choices that make unexpected events harmless (I strongly believe that making progress in this matter is the goal of a civilization).
Someone telling you the contrary either doesn't know a solution – or has interests conflicting with your ability to benefit from certainty.
In both cases, hiring either an ignorant or a traitor is probably not the best way to make any progress on "security", and certainly not towards anything "provably secure".
Now, before myriads of ignorants or traitors invade my mailbox, it is time to provide grounds for these claims.
It's always better to be judged on one's acts rather than on one's talks. And the publicly available part of this work is called G-WAN.
Since 2009, the year of its first public release, not a single vulnerability has been found in G-WAN's HTTP parsing. And, guess what, this is for a damn good reason called "certainty".
As a side-effect of this design choice, G-WAN (at the time we write these lines) is also faster than all other servers – proof that "certainty" need not compromise execution speed.