Feedback

Theory vs. Practice

Diagnosis is not the end, but the beginning of practice.
–Martin H. Fischer


How G-WAN went from 850k RPS (in 2012) to 242m RPS (in 2025)

Wonder why China leads BigTech? U.S. and European products are all the same: not performant, not safe, not innovative – yet a few vendors get all the business. Copy & paste has replaced R&D (for the sake of infinite debt-financed growth). Jobs are disappearing, and the ones that remain are boring. I will show here how we can (and therefore should) do much better.

After 45 years of engineering, I have seen a lot of organizations, platforms, people and programs. I have always felt there was a fundamental difference between people (and therefore between what they say and what they build). I believe it explains how G-WAN has evolved while all the others have stagnated.

In 2009, I wrote G-WAN because none of the available HTTP servers matched my needs. When I had something to publish (faster, simpler, more reliable) I shared my work as freeware, along with my views about what was (and still is) wrong elsewhere.

G-WAN is 453 times faster than NGINX (uncached 100-byte file, Intel Core i9 CPU)

In 2025, G-WAN (242m RPS) is 453 times faster than NGINX (555k RPS) with 10k users and an uncached 100-byte file, on an Intel Core i9 CPU. With such energy and hardware efficiency, my $1.5k PC is a Cloud.

In 2025, Wikipedia states that Google uses 2.5 million servers to serve an estimated 40m searches per second – multiplied by 5 because "4 parts responds to a part of the request, and the GWS assembles their responses and serves the final response to the user", said Google in a 2003 report.

200m RPS (5 × 40m RPS) served by 2.5m servers gives 200m / 2.5m = 80 RPS per server. The energy and hardware costs are gigantic – hence the GAFAM now buying nuclear plants to cope with ever-increasing traffic and operating costs!

Who needs scalability? Startups? Internet, Phone & TV networks? Data centers, Web hosting and Cloud operators? Video streaming platforms? Payment platforms? Social networks? The GAFAM (operating systems, Web browsers, search engines)? Government administrations? Stock exchanges? Clearing houses? Banks?

Already we can see that some of these players have an incentive to promote inefficiency (and censor efficiency) to preserve (or grow) their revenues – at the expense of their customers (the largest of all being governments, that is, taxpayers).

If you can't (or don't want to) buy nuclear plants, there's G-WAN.

In 1979, when I started programming in asm, MS-DOS did not exist. End-users, large and small, naturally spent their money only on good things (life is too short to waste time on junk).

In 2009, when I ported G-WAN from Windows (1993) to Linux (1991), G-WAN was my first Linux program. So I looked at the source code of some programs (like NGINX) to find which Linux system calls were needed, and how to use them.
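
For readers unfamiliar with those calls, here is a minimal sketch of the kind of non-blocking accept/read loop that epoll enables (illustrative only – this is not G-WAN's code, and error checks are omitted):

    /* Minimal single-threaded epoll accept/echo loop (illustrative sketch only;
       error checks omitted for brevity). Build: gcc epoll_sketch.c */
    #define _GNU_SOURCE
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int ls = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        sa.sin_port   = htons(8080);              /* listen on port 8080 */
        bind(ls, (struct sockaddr*)&sa, sizeof sa);
        listen(ls, 128);

        int ep = epoll_create1(0);                /* one epoll instance */
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = ls };
        epoll_ctl(ep, EPOLL_CTL_ADD, ls, &ev);

        for(;;)                                   /* the event loop */
        {
            struct epoll_event evs[64];
            int n = epoll_wait(ep, evs, 64, -1);
            for(int i = 0; i < n; i++)
            {
                int fd = evs[i].data.fd;
                if(fd == ls)                      /* new connection: register it */
                {
                    int c = accept4(ls, 0, 0, SOCK_NONBLOCK);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = c };
                    epoll_ctl(ep, EPOLL_CTL_ADD, c, &cev);
                }
                else                              /* readable client: echo, or close on EOF */
                {
                    char buf[4096];
                    ssize_t len = read(fd, buf, sizeof buf);
                    if(len > 0) { ssize_t w = write(fd, buf, (size_t)len); (void)w; }
                    else        close(fd);
                }
            }
        }
    }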

  I was stunned by the exception handling in the NGINX source code, working around so many bugs and incompatibilities of GNU LibC and Linux,
  and by how NGINX forces end-users to set obscure system options in configuration files, instead of doing it correctly in its own code.

Seeing this, I assumed that, given their age, the Linux APIs used by G-WAN (epoll, pthreads, etc.) would be stable (and their bugs fixed) so that G-WAN would run fine for the foreseeable future. That was a reasonable assumption. But it was wrong – this is a structural issue:

"(a) Things change too quickly, breaking both open source and proprietary software alike; (b) incompatibility across Linux distributions. This killed the ecosystem for third party developers trying to target Linux on the desktop. You would try once, do your best effort to support the 'top' distro or if you were feeling generous 'the top three' distros. Only to find out that your software no longer worked six months later. We missed the big picture. We alienated every third party developer in the process.

What we did wrong: backwards compatibility, and compatibility across Linux distributions is not a sexy problem. It is not even remotely an interesting problem to solve. Nobody wants to do that work, everyone wants to innovate, and be responsible for the next big feature in Linux.

So Linux was left with idealists that wanted to design the best possible system without having to worry about boring details like support and backwards compatibility. The only way to fix Linux is to take one distro, one set of components as a baseline, abandon everything else and everyone should just contribute to this single Linux."


–Miguel de Icaza, "What Killed the Linux Desktop" (2012)

Being wrong might be good – if you bother to correct what's wrong (preferably before imposing what's wrong on the world). But here, for operating systems (an OS is the very basis of any software stack), not many people were eager to recognize their mistakes. And even fewer people even tried to correct them – leading to a perpetual, ever-growing mess.

A 30-year-old OS (kernel, LibC and other usermode interfaces) should be well-documented, stable and debugged. If that's not the case, then you have a very serious management problem. That this could last three decades is beyond unacceptable. Accountability matters: these accumulated decades of inconsistencies cost all end-users hundreds of billions of dollars, every year.

Worse, the people in charge actively reject any serious contribution fixing the sorry state of things:

When the (theatrical?) C (193 CVEs since 1987) vs Rust (16 CVEs since 2012) religious battle shook the Linux kernel (at least in online media), I offered to donate SLIMalloc to half a dozen prominent directors of the Linux Foundation because, hey, it makes C "memory-safe" while accelerating the code. Guess what: nobody even replied.

They claim to be "idealists that want to design the best possible system" but they seem to be asleep at the switch, or defending a walled garden of ever-growing, artificially-created backdoors:

"The 'many eyes' of open source are blind, uninterested, or selling to governments for profit."
–Brad Spengler, Open Source Security, Inc. (2012)

Oh. I am not the only one noticing that there's a serious unaddressed problem. This is a long-term war waged by well-funded legions of people betraying the common good against anyone doing the job correctly. Their motive? Follow the money, as Brad said!

So, to revive the ever-crashing G-WAN of 2014, instead of hopelessly trying, like NGINX, to cope with an endlessly growing set of system issues (and transferring that cost to end-users), I opted for a more reliable way to make G-WAN run durably on Linux: static linking. A choice that all Linux distributions (all but Alpine Linux) deny to Linux users: GNU LibC is designed to fail with static linking (cui bono?).

G-WAN can't force people to use a statically-linked distribution, nor can I link G-WAN statically with musl LibC and still support JIT servlets linked to 18 programming-language runtimes that use GNU LibC... unless G-WAN embeds a dynamic module loader and linker (in which case it works in both situations). It's worth noting that such contortions are only due to the poor technical choices of the usermode layer of the OS.
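
The general idea behind such a loader can be sketched with the standard dlopen()/dlsym() interface (G-WAN's embedded loader and linker is its own implementation; the module name and "servlet_main" entry point below are hypothetical, for illustration only). Note that standard dlopen() is generally unreliable or unavailable from a fully static binary, which is why an embedded loader is needed at all:

    /* Sketch: a host program loading an optional module at runtime. The module
       path and its "servlet_main" entry point are made up for this example;
       error handling is minimal. Build: gcc loader_sketch.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    typedef int (*servlet_fn)(int argc, char *argv[]);

    int main(int argc, char *argv[])
    {
        /* load the shared object only if/when it is needed */
        void *mod = dlopen("./hello_servlet.so", RTLD_NOW | RTLD_LOCAL);
        if(!mod) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

        /* resolve the (hypothetical) entry point exported by the module */
        servlet_fn run = (servlet_fn)dlsym(mod, "servlet_main");
        if(!run) { fprintf(stderr, "dlsym: %s\n", dlerror()); dlclose(mod); return 1; }

        int ret = run(argc, argv);                /* call into the module */
        dlclose(mod);
        return ret;
    }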

Had the dynamic-linking choice been accidental, every distribution would have copied Alpine Linux.
They didn't, proof that this bad choice was not accidental.

How can it be that dynamic linking, yet another "insult to the human brain", the infamous "Microsoft Windows DLL Hell", has infiltrated Unix and survived more than 30 years – in an operating system made by people who consider themselves the best of the best?

In 1984, a Turing Award lecture reminded us how the U.S. DoD had explained in 1973 how to penetrate computer programs "without detection":

"No amount of source-level verification will protect you from using untrusted code."
–Ken Thompson, "Reflections on Trusting Trust", Communications of the ACM, volume 27, number 8, pages 761-763

On Windows and Linux, this vulnerability is enforced by a LibC (or other language runtimes) designed to work only as shared libraries (which your programs will rely on, before and after they have been remotely updated).

Exactly as our communications and Web sites are pirated while we are educated to blindly trust third parties in love with ubiquitous kill-switches. And so we:

  1. embed Web resources hosted by third-parties in our Web pages (JS, fonts, pictures, videos) that can trigger vulnerabilities (or break features) on both sides (clients and servers) – even on a per-case basis (targeting your largest customer for example),

  2. use Web browsers doing encrypted telemetry (collecting everything typed at your keyboard, done and said in the room, all modified files on your disks, selling backdoors [1] to anyone paying for remote access to your machine; nobody cares, so the sky is the limit – note that the same clandestine activities take place in our smartphones and connected-cars, a regulatory obligation since 2006),

  3. pay for "SSL certificates" that are bypassed by hundreds of thousands of "root certificates" used by tens of thousands of government agencies and private companies (also without oversight),

  4. deploy "secure" SSL and TLS layers which are much more vulnerable than the "unsafe" HTTP/1.1 protocol (HTTP/3 needs twice the server hardware, and enforces TLS and DoH (usually already used by HTTP/2), cutting essential security features: traffic oversight and the DNS host blacklists used for decades by network administrators eager to control and limit what's happening on their LAN):
    Version   Date  Specs     Key Features
    HTTP/0.9  1991  -         TCP, one-line text protocol with only the GET method
    HTTP/1.0  1996  RFC 1945  TCP, status codes, HTTP headers, optional keep-alive connections, POST and HEAD
    HTTP/1.1  1997  RFC 9112  TCP, keep-alive connections by default, more methods, etc.
    HTTP/2    2015  RFC 9113  TCP, binary framing layer, multiplexing, header compression (HPACK), server-side push
    HTTP/3    2022  RFC 9114  UDP, QUIC, TLS by default, header compression (QPACK), connection IDs, more about it here

  5. forget that the only way to guarantee mutual authentication and payload integrity (without delegating our whole security chain to third-parties) is for the DNS/Web/Email/VPN Apps to actually let end-users themselves sign and verify their requests/responses,

  6. deploy ever-failing encryption standards:
    "The move away from prescriptive standards towards a focus on outcomes under the NIS Regulations was welcomed because: standards are soon rendered out-of-date by fast-changing threats and the frequent discovery of previously unknown vulnerabilities".
    –Cyber Security of the UK's Critical National Infrastructure

Is the taxpayer really happy to see his own money constantly used against him? Would we continue to fund so generously the ones betraying us if we had the choice? Certainly not – and that's why the taxpayer is not given a voice about where his money goes (governments are by far the largest Cloud buyers)!

But there's worse. And this time the imperatives of "Defense" (which is "Offense" in reality) cannot be invoked: it's mere fraud.

Static linking explains how G-WAN has survived Linux's planned obsolescence, but it does not explain G-WAN's massive progress in performance.

While ditching GNU LibC calls, I wrote SLIMalloc, a memory allocator that was faster than all others in 2020... on top of having no "memory-safety" vulnerabilities. Why? Because, surprise-surprise, most of the ever-growing errors affecting G-WAN were "memory-safety" errors (courtesy of GLibC(!), as my 2023 SLIMalloc paper illustrated in graphic detail). GLibC was in good company: all the memory allocators were weaponized: their flaws were well-known and... actively exploited (most Google Chrome vulnerabilities are memory issues, and Google Android has the largest percentage of memory issues of any OS).
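
To make the "memory-safety" point concrete, here is the kind of trivial heap error that a permissive allocator accepts silently (and that attackers weaponize), while a hardened allocator is designed to detect and stop – a purely illustrative snippet, taken from neither G-WAN nor GLibC:

    /* Two classic heap errors that a permissive allocator lets through silently:
       an out-of-bounds write and a use-after-free (purely illustrative). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *buf = malloc(16);
        if(!buf) return 1;

        /* 1) heap buffer overflow: 27 bytes (26 letters + '\0') copied into a
              16-byte block, corrupting the allocator's or a neighbor's memory */
        strcpy(buf, "abcdefghijklmnopqrstuvwxyz");

        free(buf);

        /* 2) use-after-free: the block may already be reused elsewhere,
              so this write can corrupt unrelated live data */
        buf[0] = 'X';

        puts("no error reported");                /* a permissive allocator says nothing */
        return 0;
    }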

Linux man pages, XKCD Manual Override

The XKCD "Manual Override", reloaded:
All these years, as I was rewriting some of the many GNU LibC APIs, it was clear that many were redundant and pointlessly complex, involving the memory allocator for no reason, etc. (too many untrustworthy, technically-unjustifiable design choices).

Like two decades earlier (when I revisited "secure" protocols and encryption standards), this made me increasingly doubt the will of the people in charge to work for the common good: it was not clear whether half-baked features like epoll(7), aio(7) or io_uring (praised for merely delivering 10% gains in corner cases) resulted from a lack of competence, or from a plan to waste people's time by increasing their learned helplessness.
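
For readers who have never seen these interfaces, here is what a minimal io_uring read looks like through liburing (a sketch assuming liburing is installed; whether this extra machinery pays off outside corner cases is exactly the question raised above):

    /* Minimal io_uring read of one file via liburing (sketch; build with
       gcc uring_sketch.c -luring, liburing installed, Linux >= 5.1). */
    #include <fcntl.h>
    #include <liburing.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        struct io_uring ring;
        if(io_uring_queue_init(8, &ring, 0) < 0) return 1;   /* 8-entry queues */

        int fd = open("/etc/hostname", O_RDONLY);
        if(fd < 0) return 1;

        char buf[256];
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);  /* submission entry */
        io_uring_prep_read(sqe, fd, buf, sizeof buf, 0);     /* queue the read   */
        io_uring_submit(&ring);

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);                      /* wait for completion */
        if(cqe->res > 0)
            fwrite(buf, 1, (size_t)cqe->res, stdout);        /* print what was read */
        io_uring_cqe_seen(&ring, cqe);

        close(fd);
        io_uring_queue_exit(&ring);
        return 0;
    }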

G-WAN has made this giant leap in performance, durability and security by questioning the operating system – and by avoiding its many traps designed to keep us captive and impotent.

Nobody was helping me, nobody was eager to pay me, and myriads of trolls were sabotaging, censoring and denigrating my work, many of them appointed by our governments. Two decades without revenues have left my family in a difficult situation, reducing my business options, our kids' options and their future.

Like Galileo Galilei (1564-1642) hitting a wall of denial erected by his astronomer peers funded by religious authorities, I have been targeted by the "Holy Inquisition" condemning my achievements for contradicting their gospel. My only sin was to do the right thing. Shame on the many who have used State powers and public money to hurt me, block progress, and hide their criminal acts.

Back to the OS planned obsolescence.

If you discover that you need this list of endless workarounds (say, to keep a program working despite constant system changes), then this information may have value (especially when disclosed before the gratuitous system changes are deployed).

You may consider paying for it. But then you will depend on someone (who has demonstrated a natural inclination for extortion). By merely delaying your access to this list, this someone can raise prices or eradicate your business, by the press of a button. Further, having paid for something without any guaranteed value, you will have no recourse.

Since this list is constantly made obsolete by new arbitrary modifications and trivial bugs, it does not look like technical progress at all. It's a cheap trick to keep the competition at bay: only the ones at the switch are able to keep their programs working at all times... because, well, they are the ones who created the problems in the first place.

As SLIMalloc has demonstrated, the so-called "security" industry generates its revenues in exactly the same deceptive way – with a never-ending trap and never through progress: on the contrary, as time goes by the security market grows exponentially, demonstrating the inefficiency of eye-wateringly expensive "security" products which themselves are... injecting critical vulnerabilities! (Kaspersky and Symantec show that this is an international issue). Light is the best disinfectant.

The open-source ecosystem (GNU, Linux) did not invent this trick: since day one, Microsoft had a very similar system in place, and Remote-Anything (TWD's first product, released in 1998, with 280m licenses deployed in 138 countries) had to replace several Win32 API and LibC calls on a weekly basis merely to stay alive (making the executable file grow, yet run faster, and more reliably).

RA was surviving – thus the need for the well-funded proxies of the Microsoft VIA (Virus Information Alliance), and later for Windows "Defender", to eradicate the (from their point of view undesirable) competitors that do not directly contribute to Microsoft revenues by paying a tax merely to see their programs survive... an operating system acting like an ever-reconfigured minefield.

Microsoft IIS miserably dying at birth, on a modest load

The exponential scale of this 2009 benchmark makes it easy to understand why Microsoft Windows "Defender" felt the urge to erase G-WAN... 3 days after a conference call with 5 Microsoft directors eager to buy the G-WAN source code to improve the abysmal Microsoft IIS v7+ performance.

Microsoft operates as a criminal organization – read the U.S. DoJ findings (they can't "debunk" officially collected evidence, so they bribe officials to erase public records and to avoid DoJ sanctions).

According to the rule of law, all of these tactics are criminal, and if the laws were applied then these illegal behaviors would disappear overnight. That's why officials are paid for the laws not to be applied – or applied only with symbolic sanctions (immediately compensated by golden subsidies, public and private investments, and recurring public contracts).

The result of destroying competition is more expenses for end-users (G-WAN allows you to do more with less, and that's why it has been illegally deleted by Windows, and ousted from the market by the coordinated sabotage, censorship and denigration of G-WAN competitors).

Deceiving others does not require much talent (nor effort) – which most probably explains its success among people combining limited intellect, limited skills, and limited respect for the commonly-praised human values designed to let people live together – you shall not murder, steal, lie, etc. (fostering productive cooperation instead of constantly stealing from and killing each other, a sterile zero-sum game).

Even worse, unlike others, the people naturally inclined to cheat (rather than to perform themselves) are now in charge of managing others, and their addiction to undue honors is encouraged by a string of easy (yet self-defeating) successes that invariably leads them to even more limited intellect, skills, and respect for human values. Tip: these characteristics are most often a reliable way to spot them – this, and their endless desire to eradicate anyone doing better (instead of seizing the opportunity to learn and make progress).

If you wonder where all the money of the OS vendors ($9Tn) comes from: they run a publicly-funded and protected racket:

  • closed-source forces its "strategic partners" to "Pay the Bill to enter the Gates" [in the 2000s, a $20m entry ticket for the long-secret "Native APIs"] (without paying, they can enjoy a free promenade on the minefield, with Windows "Defender" killing the survivors),
  • open-source charges "consulting fees" for complete, up-to-date documentation (without paying, you will see your perfectly working programs miserably die, and discover months if not years later why and how they have been sabotaged by arbitrary system changes),
  • closed-source and open-source are a distinction without a difference.

These two business-models claim to compete – but they rely on the exact same treacherous tactics to stay in power.
Like political parties, they form one single ecosystem made of many well-funded organs, the fake, controlled opposition.
Like cyber-security: SLIMalloc was censored and denigrated by the CTO of a cyber-security U.S. Defense contractor.

This same scheme rules all the domains of our society (toxic placebos are sold as a false cure to an artificially created problem, leading to more placebos to correct the side-effects of the previous toxic placebos, and so on – with spiraling costs).

From time to time, some "experts" present themselves publicly as THE reference to consult (at a price, obviously: "free software" is free, but not the information about how to make it run properly, or durably).

To promote themselves, they often form alliances with similar people, and help each other by constantly attacking their mutual competitors, but most of the time they are just mercenaries available for hire. Microsoft's "Evangelism is War" is famous for showing how such toxic ecosystems have been created and endlessly funded (despite being totally illegal):

"Evangelism is War", a 1997 "Microsoft confidential" document, was written to train Microsoft employees about how to bribe journalists, consultants and academic sources to have them publish... biased information.
More "Microsoft confidential" papers came later to fill in some blank areas, and many more documents, like emails, were seized by the U.S. antitrust authorities.
Here are some enlightening excerpts (light is the best disinfectant):


 "The elements of evangelical infrastructure are conference presentations, magazine articles
  (media press), white-papers etc (pseudo technical reviews) and they start hitting the streets
  at the start of the 'Slog'. They should be numerous so as to push all other..off the shelf.

 Working behind the scenes to orchestrate 'independent praise of our technology, and damnation
 of the enemies' is a key evangelism function during the Slog.

 'Independent' consultants should write columns and articles, give conference presentations and
 moderate stacked panels, all on our behalf (and setting them up as experts in new technology,
 available for just $200 hour).

 A stacked panel on the other hand is like a stacked deck. It's stacked with people, who, on
 the face of things should be neutral, but who are in fact strong supporters of our technology.

 'Independent analysts' reports should be issued, praising your technology and damming the
 competitors (or ignoring them).

 'Independent' academic sources should be cultivated and quoted (and research money granted)."
 

They proudly say that "the Matrix has you" because it's an invisible, endless trap, and most of us do not even suspect it's there.

  Such treacherous tactics, when funded by big money, leave little room for merit, if any, hence their constant shameless lies.

As a direct consequence, the quality of all financially-successful technologies and products falls – and so does the know-how, while even the Temple of Knowledge (private and public academic research) is corrupted from top to bottom. In the total absence of sanctions (because the authorities were selected and/or paid to look elsewhere), it is not accidental that the worldwide 2019 health crisis has been yet another perfect execution of this lame tactic (by the same people): if constant sabotage has worked on computers, why not do it to humans?

A 2012 G-WAN discussion at Phoronix (which ran benchmarks on many CPUs that... confirmed the fairness of our Nginx tests):

Benefiting from their anonymous accounts, some use technically invalid arguments (incorrect statements about G-WAN's features and architecture, and even accusatory reversal, a very common troll tactic) while others describe their own experience with G-WAN (which is then backed by hard facts and real knowledge of G-WAN). Yet the only metric that seems to matter (and to calm the trolls) is the sheer volume of negative content (once it has finally overwhelmed the positive feedback).

In particular, being "open-source" is falsely linked to "better security" (ask Brad!) while, in the real world, security researchers throw random inputs (fuzzing) at compiled programs to find bugs (which may then be exploitable or not). These trolls, like most of this technical audience, know very well that they are using plain lies, yet nobody contradicts them (telling the truth is not without risks).
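
For context, this is what fuzzing looks like in practice: a harness feeds random, mutated inputs to the code until a sanitizer reports a crash. The sketch below uses the libFuzzer entry point; parse_request() is a made-up stand-in for any input parser, and black-box fuzzers apply the same idea directly to binaries:

    /* libFuzzer-style harness (sketch): clang -g -fsanitize=fuzzer,address fuzz.c
       The parse_request() target is a made-up stand-in for any input parser. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static int parse_request(const char *buf, size_t len)    /* hypothetical target */
    {
        char method[8];
        if(len && buf[0] == 'G')
            memcpy(method, buf, len);             /* overflows when len > 8 */
        return 0;
    }

    /* the fuzzer calls this entry point with millions of random/mutated inputs
       until a sanitizer reports a crash */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        parse_request((const char *)data, size);
        return 0;
    }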

This fallacy (of "open-source" offering "better security") is further demonstrated by the fact that G-WAN (unlike Varnish or Nginx) has had no security vulnerabilities since its launch in 2009 (forcing the trolls to fake one)... despite being an Application server supporting 18 scripted JIT programming languages (a much, much larger attack surface than any server ever published in the whole history of the Web)!

Seven minutes apart – and only 2 hours after the discussion was created – the authors of Lwan (a copycat) and of the unsafe and slow Monkey (here vs G-WAN on the same Core2 CPU) are on page #1. How did they learn so fast (and at the same time!) about the G-WAN Phoronix discussion? L. Pereira (alias "foobrain") even created a Phoronix account that same day to insert a comment ("foobrain, Junior Member, Join Date: Jul 2012, Posts: 1" – and he never made any other post later). Here, they presented themselves as neutral, honest people... while the latter (a Seattle resident) had created a hate blog with the now-fashionable domain name "gwansucksballs.com", directly insulting me and stating that G-WAN was "nothing but a scam". The former gentleman was just a tone less virulent, yet publicly accused me of having done the unfair things that he himself had an irrepressible passion for.

This Phoronix discussion ended as a cover-up (most of the myriad anonymous hate blogs are gone now) to pretend that G-WAN "was not so fast after all: it has been fast, but it's no longer the case". At the time, I had not yet deployed the latest Linux distributions, so I could not know that they had already made changes to make G-WAN constantly crash – killing its performance (and gradually all its features, one by one, to the point of it no longer being useful).

In 2025, I have bought a Phoronix subscription and offered to work with them to benchmark the new G-WAN.
Michael Larabel, the owner, did not reply.

That's the same tactic used by the Wikipedia and Stackoverflow administrators: tens of thousands of G-WAN Q&As have been erased... to leave only negative content, if any (Wikipedia even removed the G-WAN 'discussion' page explaining why it was censored). Their ultimate goal seems to be to occupy all the space, whatever it takes.

What do you call a thing presenting itself as "free and open markets" where only the least efficient solutions have the right to exist?

A Self-Evaluated 'expert', aiming to make the form match the function (HAProxy's Willy Tarreau photo ranking first on three search engines)

The sneakiest ones pick an inactive or retired target (a kind that rarely retaliates), feeling that they are immune to judicial measures.

I have been really busy working hard for a while. They, instead, troll competitors, an activity consuming almost all of their agenda – hence the poor progress of their products: NGINX, like all the other servers listed on Wikipedia, is hundreds of times slower than G-WAN!

That's how I learned about Willy Tarreau (HAProxy), a self-evaluated, well-promoted "expert", aiming to make the form match the function.


March 2017 comments about a 2016 Web-server benchmark made by... "Jarrod Farncomb" on March 9, 2016:

Willy Tarreau March 3, 2017 at 9:34 am (coming a year later)

Jarrod, your sysctls are completely bogus I'm sorry :

– tcp_mem counts in pages, not bytes so you allocated 114 GB of RAM to the TCP stack
– somaxconn is 16 bits so 100000 doesn't fit, is rejected and either the default 128 stays
(recent kernels) or only the lowest 16 bits are used (34k)
– tcp_rmem and tcp_wmem default values cause the system to try to allocate 30MB for the read and write buffer upon each accept/connect, that results in disastrous perfs.
– tcp_tw_recycle must NEVER be set (never ever) otherwise you'll randomly see some fantom sockets closed on your client but still established on the server, causing jerky traffic spikes.
– the other ones are clearly random values padded with zeroes

Also it's not mentionned whether or not you properly stopped iptables and unloaded conntrack modules (nor if you left it without tuning it). You cannot claim to correctly compare products with such settings, these bogus settings add a huge amount of randomness in your measures. I'm not surprized Valentin got much better values. The problem is that some people with copy-paste your settings for their production servers and report issues to the product vendors.

Jarrod March 3, 2017 at 9:46 am

That's possible, I didn't create them after all. I believe I used the same settings defined here: http://gwan.com/en_apachebench_httperf.html

This was because I came across G-WAN initially which is what peaked my interest in performing my own tests.

To be honest I can't remember the specifics like iptables as this was over a year ago, however it would have been the same on all tests, and as long as that is the case then I believe the comparison between them all is still valid in this aspect.

Additionally I advise not using these settings in production in the post and state that they were only modified for benchmark purposes.

I'm more than happy to take advice from an obvious professional in this area such as yourself before I perform future testing, feel free to advise me on all correct settings that I should use.

Willy Tarreau March 3, 2017 at 10:29 am

Wow, gwan being a server vendor they have zero excuse for doing these huge mistakes! I thought they were doing serious stuff now I have the proof that they don't know what they're talking about when it comes to performance, which they claim is their main differenciator. –Willy Tarreau (HAProxy)
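
Readers need not take either side's word on these settings: the kernel exposes them under /proc, so a point such as the tcp_mem unit question (pages vs. bytes) can be checked directly with a few lines of C (a sketch, assuming a Linux host):

    /* Read net.ipv4.tcp_mem from /proc and convert its page counts to bytes,
       so the "pages, not bytes" unit question can be verified directly. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/net/ipv4/tcp_mem", "r");
        if(!f) { perror("tcp_mem"); return 1; }

        unsigned long low = 0, pressure = 0, high = 0;
        if(fscanf(f, "%lu %lu %lu", &low, &pressure, &high) != 3) { fclose(f); return 1; }
        fclose(f);

        long page = sysconf(_SC_PAGESIZE);        /* usually 4096 bytes */
        printf("tcp_mem (pages): low=%lu  pressure=%lu  high=%lu\n", low, pressure, high);
        printf("tcp_mem (bytes): low=%.1f MB  pressure=%.1f MB  high=%.1f MB\n",
               low * (double)page / 1048576, pressure * (double)page / 1048576,
               high * (double)page / 1048576);
        return 0;
    }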

The horrendous charge of Willy Tarreau (HAProxy) is not only (1) misattributed (Intel Research wrote these sysctls) and (2) misplaced (Intel was right), it is also (3) totally gratuitous (G-WAN did not compete with HAProxy).

Further, the delay between the questions and the answers suggests a coordinated fraud: one year after the blog post, Jarrod answers Willy's comment within 12 minutes (other users get answers in days, not minutes, except for a few comments from people suggesting, under obvious pseudonyms carrying their own message, that other HTTP servers should be tested).

Even more revealing, my reply correcting the false, injurious statement of Willy Tarreau (HAProxy) was censored by "Jarrod", the blog author... who removed the real sysctl authors from his blog. Was it the same neutral and benevolent "Jarrod" who so elegantly called G-WAN "a piece of shit" to justify censoring it on Wikipedia and WikiVS?


How trustworthy are all these publications edited by anonymous authors and contributors who promote crap and censor good things?
Why do they need, every single time, to use "bad words" and lies? Could it be that they lack technical arguments?

This kind of fake or complacent blog, unconditional opportunistic collaboration in anti-competitive activities, or secret lucrative collusion against anyone doing better, allows them to pretend that they are genuinely working for the common good (rather than endlessly spreading Fear, Uncertainty and Doubt about a competitor, in gazillions of such blog posts, papers, magazines, conferences, etc.):

  1. G-WAN quoted sysctls from Intel Research (the ones about which Willy Tarreau states "they don't know what they're talking about")
  2. [use Ctrl+F to search "Corporation" on today's page] [or on the 2012 Web Archive of this page]
    # "Performance Scalability of a Multi-Core Web Server", Nov 2007
    # Bryan Veal and Annie Foong, Intel Corporation, Page 4/10

    fs.file-max = 5000000
    net.core.netdev_max_backlog...

    G-WAN took these kernel options from pertinent Intel Research (Ctrl+F "fs.file-max"):
    Slides: https://www.cse.wustl.edu/ANCS/2007/slides/Bryan%20Veal%20ANCS%20Presentation.pdf
    Paper: https://www.cse.wustl.edu/ANCS/2007/papers/p57.pdf

    Willy Tarreau most probably knew that, in 2016, Intel was the main contributor to the Linux kernel.
    So, finally, G-WAN had been right since 2009. And Willy Tarreau (HAProxy) publicly wrote what he knew to be false – at the expense of the misinformed HTTP-solution buyers that he apparently feels a duty to deceive and abuse – instead of trying to become better and match G-WAN, the target of his groundless denigration.


  3. What Willy Tarreau (HAProxy) has incorrectly called a "huge mistake" came neither from G-WAN nor from Intel engineers. The author of HAProxy, a consultant who presents himself as a "Linux kernel expert", either:

    – did not know that kernel settings (their ranges of valid values and their units) may have changed between 2007 and 2016;

    – did not know that the kernel source code (and in particular the portion related to the sysctls) may be modified to achieve very different performance outcomes;

    – or he knew very well but pretended not to understand – assuming that his readers would not be able to contradict him (and, miraculously, he was right: my reply was not published by the blog author).


  4. Why Willy Tarreau, author of the very slow and unsafe HAProxy, felt the need to tell such an outrageous lie merely to publicly tarnish the unrelated but very fast and safe G-WAN remains an open question (cui bono?)... but he is a conference speaker heavily promoted by a well-funded media ecosystem, including Youtube publishing dozens of his videos.
    Someone has to pay for all that circus. And this someone certainly wants a return on investment. So, instead of being censored like many others, Willy Tarreau, despite his disastrous skills, is promoted by the ones that have censored Remote-Anything (1998), G-WAN (2009), Global-WAN (2010) and SLIMalloc II (2023).

This content, denigration campaigns disguised as legitimate blogs, is then indexed by search engines (owned by the GAFAM). Their goal since 2009 has been to hit G-WAN revenues, by (a) pretending to bring better information, and (b) abusing anyone who lacks the time (and/or the skills) to spot the lies.

This is not an accident, this is a well-funded criminal system:

Wikipedia and WikiVS list dozens of HTTP servers and Application servers, yet G-WAN, like SLIMalloc, has been constantly censored from all the platforms:

  • the Wikis deleted the G-WAN articles,
  • the G-WAN forum, restarted 3 times from scratch, was deleted by its hosting companies each time it reached more than 5k users,
  • LinkedIn, Wordpress, Blogger and Stackoverflow erased my accounts 3 times and deleted thousands of Q&As,
  • our dedicated symmetric leased lines broke down 2-3 times a week despite 24/365 SLAs with MCI-Worldcom,
  • our pre-paid Press ads were showing competitor products,
  • our domain names were hijacked by VeriSign for months, pointing to servers promoting competitors,
  • Digital-River acquired 5 credit-card platforms we used for online sales – and each time redirected our customers to competitor sites,
  • selling our brands as keywords, Google redirected Web searches for our products to our competitors,
  • RA and G-WAN were erased by anti-virus products claiming that these "commercial products are not viruses", etc.

Some claim that "Uncle Sam" is the guilty party – this is false: while the NSA, the FBI and the DoD have often participated in these clandestine actions (censoring some products and promoting others), they are funding and defending one single international community – against the interests of the U.S. taxpayer (forced to subsidize and consume the junk that kills him).

After I found the real (proxied) IP address of the WikiVS (and Wikipedia) vandal erasing the G-WAN articles, and while he was begging me not to denounce him, Per Buer (Varnish CEO; Varnish is a slow and unsafe proxy server, like HAProxy) confessed to me by email that he had deleted G-WAN dozens of times per day on such platforms... under several fake anonymous accounts, using the TOR Web browser (aka the "Dark Web", a tool funded by the U.S. Navy) to hide his IP address.

Per Buer and Willy Tarreau do not live in the USA. They don't contribute to the U.S. economy. Yet their treacherous business is protected and promoted by civil servants paid with U.S. public money.

The recurring argument of the Varnish and HAProxy people is that "G-WAN is a very un-notable software which has no value". Certainly, censorship helps bad products by keeping good things like G-WAN unknown, as only the bad things (Nginx, Varnish and HAProxy) enjoy the right to exist. The companies that have to resort to deleting their competitors' presence on the Web obviously make such sub-standard products that they would have neither clients nor funding (if the competition was not clandestinely eradicated, with the assistance of several government agencies). Yet most of these treacherous companies are millionaires – and enjoy endless promotion on Wikipedia, Youtube and LinkedIn.

How do they make money? They proudly expose their successes on Wikipedia, so let's pick an example:

"The Fastly platform is built on top of Varnish" (what a stupid idea to use a Web-server "accelerator" that's slower than Nginx!).

Founded in 2011 by A. Bergman, the Fastly CDN (NYSE:FSLY, assets: $1.53 billion) had a 2023 revenue of $506 million and a net income of $133 million (11 times less than its assets), and hires former directors of Alphabet and CISCO. Bergman was the former CTO of Wikia, a privately held, for-profit Delaware company founded by Wikipedia co-founder Jimmy Wales and funded by Amazon.

Oops. This summary alone demonstrates that this community has a passion for incestuous business relations (Bergman was European, like Varnish – yet, while lacking any distinctive technology and know-how, they got royal treatment from U.S. Venture Capitalists and the largest U.S. market leaders).

These connections may also explain why Varnish is quoted on 64 Wikipedia pages (while G-WAN was censored from Wikipedia by... Per Buer, the Varnish CEO). But following the precepts of Microsoft's "Evangelism is War" (self-incriminating evidence seized by the U.S. DoJ) makes all of them deserve the same sanctions (the ones that President Bush Jr. canceled for Microsoft).

In 2012, Fastly would have been at least 10 times more efficient (and therefore much more profitable) with G-WAN... and in 2025, more than 1,000 times more efficient with G-WAN (rather than with the slow and unsafe Varnish).

  • 2013 Fastly raised $10 million in Series B funding
  • 2014 Fastly acquired CDN Sumo (the capacity they lacked)
  • 2014 Fastly raised $40 million in Series C funding
  • 2015 Fastly raised $75 million in Series D funding
  • 2015 Google partnered with Fastly (because Google's 2.5 million servers were not enough)
  • 2017 Fastly raised $50 million in funding
  • 2018 Fastly raised $40 million in funding
  • 2019 Fastly filed for an initial public offering (IPO) and debuted on the New York Stock Exchange

$215 million (10+40+75+50+40) in funding and contracts with Google have helped lift yet another nonsensical activity.

$215 million was only the beginning: assets of $1.53 billion (top-of-the-line hardware servers and network infrastructure) were much needed to make the pig fly (even a slow, memory- and CPU-wasting, unsafe software server like Varnish can shine if, without consideration for the pointless extra costs involved, you throw enough hardware at it to compensate for its flaws).

Our pension funds, the ones investing in these disastrous companies, have been deprived of the revenue that a competent and honest management would have generated (had they chosen the most efficient solutions, rather than enriching their friends and family at the expense of the shareholders).

Pension funds unfunded liabilities

The money has not been lost: it has more than generously rewarded the worst industry players (and the hardware manufacturers, probably owned by Blackrock, Vanguard or State-Street – the owners of almost all of this economy, based on investing the money of pension funds in ever-growing pointless expenses to sustain an ever-growing debt).

This is not only the case in the USA. Many European countries, including Switzerland, have handed the management of pension funds to private companies like Blackrock... despite their disastrous performance (investment funds and the companies in which they invest make fortunes, but their clients are not so lucky).

Now, maybe, you can see how much sense it makes to select more wisely the people you trust.

Whoever is at the switch (promoting this recurring poor allocation of money), you are doing it wrong. Wrong for encouraging a community of technically-weak players to consider competence a nuisance, and for deceiving the taxpayer by promoting and selling the worst possible products – at the expense of investors and end-users.

How competent and trustworthy are Per Buer (Varnish, quoted on 64 Wikipedia pages), Willy Tarreau (HAProxy CEO, quoted on 35 Wikipedia pages) or William Woodruff (Trail of Bits CTO, quoted on 5 Wikipedia pages) and their well-funded/promoted peers, in comparison to G-WAN and SLIMalloc (which Varnish, HAProxy and Trail Of Bits have to denigrate and censor to keep pretending that they deserve our money)?

G-WAN is 453 times faster than NGINX (uncached 100-byte file, Intel Core i9 CPU)
  • no Web/cache/proxy/application server has made significant progress,
  • on a 10x faster CPU NGINX is slower in 2025 than G-WAN in 2012,
  • and G-WAN 2025 is several orders of magnitude faster than in 2012.

The Varnish/HAProxy/NGINX lies are so recurrent and caricatural that it all looks like theater: the best is censored, and NGINX was sold for $670m in 2019 (at the expense of pension funds!).

The level of financial and scientific fraud is absolutely striking. And the taxpayer foots the bill for their ever-growing expenses and extravagances.

Yet the impunity is total, and the worst players are endlessly funded to make bad products and denigrate the very best.

According to the "Jarrod" benchmark commented on by Willy Tarreau, G-WAN was much faster in 2012 on an 8-core CPU than all the new servers he tested in 2016 on a dual-CPU 12-core machine (four years later).

The second time, G-WAN was not tested because, surprise-surprise, "Jarrod" wrote that G-WAN was crashing (thanks to a constant stream of gratuitous changes injected by OS patches and updates).

Further, Cloud revenues would obviously suffer if their buyers found out that both the OS and the HTTP servers can be made several orders of magnitude faster and safer (finally making self-hosting a far more attractive option).

For the OS vendors (which, coincidentally, are often also Cloud vendors), it was unacceptable to let the readers connect the dots. An alternative explanation was needed. That's why Willy Tarreau falsely asserted about G-WAN (while in reality the sysctls were Intel's)... "as a server vendor they have zero excuse for doing these huge mistakes! I thought they were doing serious stuff now I have the proof that they don't know what they're talking about when it comes to performance, which they claim is their main differenciator".

There's something rotten in this whole well-financed industry: too much money consistently going to the wrong people, for the worst reasons:

Tricks and treachery are the practice of fools, that don't have brains enough to be honest.
–Benjamin Franklin (1706-1790)


The ultimate result of shielding men from the effects of folly is to fill the world with fools.
–Herbert Spencer (1820-1903)

I have presented some useful information about the I.T. industry, and shared rare knowledge acquired first-hand, the hard way. I hope that you are now able to see how and why there are two distinct classes of people, why and how they differ in nature, and why the promotion of good works is so rare that nobody can find them.

I have been working for more than 45 years in this industry, and I can tell you that this is not accidental. This is a method:

Mushrooms' law:
1. keep them in the dark,
2. cover them with shit,
3. cut them off at the knees when they start growing.

The world can be a better place – but only if the constant taxpayer-funded lies are excluded from the equation. I am not alone in having reached the same conclusion, albeit from a different point of view (this one is from a ZeroHedge.com author). The world is changing.