Diagnosis is not the end, but the beginning of practice.
I was contacted and asked to write an article about AI – and, since I got no news from them, I published it here (they finally published it more than a month later).
In contrast with the disastrous quality of the articles enthusiastically published to promote AI, my text compared today's instruments to those in use 30 years ago, and introduced new concepts of the kind so badly needed to make progress.
Having followed the "AI" players for 43 years, I indeed have some insights to share.
Theoretical and practical arguments are presented that are much needed to make progress in a discipline that, a few months ago, was in a state of "freezing" according to its specialists.
That was before a new wave of hype erased this "perception" with ChatGPT (a chatbot – something called Eliza 60 years ago) as the only word in town... despite world experts having criticized ChatGPT in graphic terms:
Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity. ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation.
So, if you are wondering what "AI" is in reality, or if you want to discover new ways to make (real) progress, keep reading to discover my article, originally titled "Making Artificial Super Intelligence (ASI)".
As a U.K.-based designer of CPUs, ARM has tried to address the "memory safety" issues that the NSA, rightly in our humble opinion, attributes to the OS vendors ("their vulnerabilities") – issues properly documented by Paul Hsieh's Microsoft Watch page.
Having received a request to compare the new security features of the recent ARM CPUs to SLIMalloc, we have searched and finally found a description of their method.
ARM has designed MTE with the assistance of Google but, since most of the published documents are marketing nonsense, it takes some time and dedication to find enough facts to evaluate their combined work.
As the left picture suggests, this is a coloring scheme: for it to work, all memory accesses MUST be made with an address that carries the SAME COLOR as the whole memory area we attempt to reach.
ARM MTE memory-access violations trigger a CPU-generated fault that crashes the process (unlike SLIMalloc, which gracefully blocks, documents, and recovers from errors).
Do ARM CPUs boosted by MTE make our OS and applications safer (and run faster) like SLIMalloc does? Let's have a look!
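For readers unfamiliar with memory coloring, here is a toy model of the idea in Python (the class, names, and granule layout are ours for illustration, not ARM's actual design): every allocation gets a "color", the same color is stamped on the memory it owns, and every access checks that the pointer's color matches the memory's color.

```python
GRANULE = 16   # MTE tags memory in 16-byte granules
TAG_MAX = 16   # a 4-bit tag allows 16 possible "colors"

class TaggedHeap:
    """Toy model of MTE-style memory coloring (illustration only)."""
    def __init__(self, size):
        self.mem = bytearray(size)
        self.tags = [0] * (size // GRANULE)  # one color per granule
        self.next_tag = 1  # real MTE picks colors randomly; a counter keeps this demo deterministic

    def alloc(self, offset, length):
        tag = self.next_tag % TAG_MAX
        self.next_tag += 1
        for g in range(offset // GRANULE, (offset + length - 1) // GRANULE + 1):
            self.tags[g] = tag            # color every granule of the block
        return (tag, offset)              # "pointer" = (color, base address)

    def load(self, ptr, index):
        tag, base = ptr
        addr = base + index
        if self.tags[addr // GRANULE] != tag:
            raise MemoryError("tag check fault")  # the CPU would trap here
        return self.mem[addr]

heap = TaggedHeap(256)
p = heap.alloc(0, 32)    # 32-byte block: granules 0-1 get p's color
q = heap.alloc(64, 16)   # another block, another color (granule 4)
heap.load(p, 31)         # in bounds: colors match, access succeeds
```

An out-of-bounds `heap.load(p, 64)` lands in `q`'s granule, whose color differs, and triggers the fault – as described above, the process is killed rather than silently corrupted.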
In those troubled times, people wonder what 'transhumanism' aims for, and what the motivations and pursued outcomes are. How come you can watch the best experts give you "answers" forever – and still wonder what this is all about? Either they don't understand the matter – or they don't want you to understand it. In both cases, that explains why their painfully long and boring speeches lead nowhere.
In "The Island of Doctor Moreau" (1996), Marlon Brando plays the role of Dr. Moreau, a mad scientist who experiments on animals, using a wireless kill-switch he carries as a medal on his chest to guaranty his security.
He ends eaten by his creatures after they have found how to remove the implant (today either injected or absorbed with food) used to remotely inflict pain and even death if they dare to keep disobeying to direct orders.
So, instead of repeating what you can find everywhere else, we will focus on what nobody is telling you.
And this must start with a simple question: Cui Bono? (who benefits?)
The COVID-19 crisis would be seen in this respect as a small disturbance in comparison to a major cyberattack. We have to ask ourselves, in such a situation, how we could let this happen despite the fact that we had all the information about the possibility and seriousness of such an attack.
Following this call, hundreds of billions of dollars of public subsidies and public contracts have since been injected by (broke) governments (borrowing from private finance) into large private companies (owned by private finance).
Did end-users get something valuable in return?
Or was it just (yet another) outright institutionalized fraud (the taxpayer paying back the extravagant debt-financed government expenses)?
Tip: the industry expects the cyber-security market to more than triple within a decade – proof that this market is a fraud: the cost of the "protection" can only keep growing if this "protection" creates more problems than it solves.
Only the taxpayer has to suffer – because he is the only one paying the ever-growing bills.
are "memory safety" issues (due to the OS memory allocator).
Memory issues bypass encryption, intrusion-detection systems, firewalls, and even Security Operation Centers – making it pointless to pour this much money into ever-failing "security" tools – at least as long as the elephant in the corridor is not addressed.
Here is the (Saudi, UK, and US) paper's introduction:
"Conventional cryptographic schemes based on data encryption standard (DES), advanced encryption standard (AES), and Rivest, Shamir, and Adleman (RSA) encode messages with public and private keys of short length. The main advantage of these algorithms is speed, and the main disadvantage is their security, which relies on computational and provable security arguments and not on unconditional proofs."
Note: in academic jargon, "provable security" means that scientists are allowed to "prove" that something is 100% safe... until it is broken. Example: "RSA is provably unbreakable"... under the (usually untold) assumption that no publicly available algorithm or machine can factorize big numbers quickly enough to compromise its security. In contrast, "unconditional proofs" supposedly do NOT rely on any assumptions (hence their value... and scarcity).
"Here we develop a physical realization of the OTP [One Time Pad] that is compatible with the existing optical communication infrastructure and offers unconditional security in the key distribution."
This patented work was published in Nature Communications on December 20th, 2019, and its authors are not shy about it:
"This system is the practical solution the cybersecurity sector has been waiting for since the perfect secrecy theoretical proof in 1917 by Gilbert Vernam. It'll be a key candidate to solving global cybersecurity threats, from private to national security, all the way to smart energy grids." – Dr. Aluízio M Cruz, co-author of the study
Is this really what it claims to be? Let's have a closer look!
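Before doing so, it helps to recall what the OTP itself is. A minimal sketch in Python (ours, not the paper's optical implementation): XOR the message with a key that is truly random, at least as long as the message, and never reused.

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    # Perfect secrecy (Vernam/Shannon) requires a truly random key,
    # at least as long as the message, used exactly once.
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))  # the hard part: distributing this key safely
ciphertext = otp(msg, key)
assert otp(ciphertext, key) == msg   # XOR is its own inverse
```

The encryption itself is trivial; the whole difficulty – and the subject of the paper – is distributing the key material securely, hence the claim of "unconditional security in the key distribution".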
It might come as a surprise to many, but the obstacles facing "balanced" relationships are always the same, whether the economy, finance, or technology is considered, because the only point that matters is "sovereignty":

"Sovereignty: the full right and power of a governing body over itself, without any interference from outside sources or bodies. You can only be 100% sovereign or not at all: there is no way to be 'partly-sovereign'."
As Adam Smith (Scottish economist, 1723-1790) demonstrated in "The Wealth of Nations", "open and free markets" depend on the sane enforcement of "fair competition" and "Justice" when self-discipline among government, finance, and merchants is lacking:
I sincerely believe that The Economist's provocative quote of Adam Smith is not the definitive answer to this old and universal problem: opposing the rich to the poor is purposely done to prevent people from spotting what really matters.
Like Adam Smith, I think that market distortions are the problem – rather than wealth coming from well-informed customers enjoying real choice (so they can vote with their purchases and promote true value, the core engine of true capitalism). So, in this article, I will present a way to make wealth compatible with sane competition – even in the absence of self-imposed discipline.
To the credit of Adam Smith, such an option was not available at his time.
According to Wikipedia, "Security Theater" is "the practice of investing in countermeasures intended to provide the feeling of improved security while doing little or nothing to achieve it."
That helps to explain why The Economist wrote that cyber-security is a "market failure"... long before the chaos that we all see now.
Unlike "post-quantum" security relying on assumptions ("number-theoretical security assumptions", today's publicly-published quantum algorithms and the unreasonable hope that new quantum methods to break what is considered quantum-safe today will never surface), "unconditional" security is future-proof because it is assumptions-free.
These unpublished assumptions impact our daily life: Academia's formally-proven LTE "forgot" to feature packet integrity, just as the formally-proven Wi-Fi "forgot" to verify the security of the protocol handshake. Both mistakes proved to be critical vulnerabilities... many years later.
The "Security Theater" is sustained by the varnish of respectability of a scientific community increasingly lacking credibilty:
Key-management is as important as data encryption: if you don't do it safely, then your long-term secret encryption key will be compromised even before you start using it to actually encrypt data!
This is why, in this article, we will see what makes today's security standards fail and we will present the requirements to deliver "unconditional" security – in key-management (generation, storage, derivation, exchange) as well as in data encryption.
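To make the key-management point concrete, here is a sketch (ours, purely illustrative) of the classic "two-time pad" blunder: reuse a one-time key and an attacker can cancel the key entirely without ever knowing it.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = secrets.token_bytes(16)        # a perfectly good one-time key...
c1 = xor(b"attack at dawn!!", key)   # ...used once: unconditionally secure
c2 = xor(b"retreat at noon!", key)   # ...used twice: fatal mistake

# XORing the two ciphertexts cancels the key: the attacker now holds
# p1 XOR p2, from which known-plaintext structure leaks both messages.
assert xor(c1, c2) == xor(b"attack at dawn!!", b"retreat at noon!")
```

This is why "generation, storage, derivation, exchange" must each be done safely: a perfect cipher with sloppy key handling offers no security at all.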
Today's universities teach the world that unbreakable encryption is "technically impossible", hence the ever-failing US standards enforced by international policies.
Described in his own words as "the most widely acclaimed security expert in the world", he contemptuously calls unbreakable encryption "Snake Oil" (he personally did me the great honor of such an email in 2013... despite a first 2008 government audit of TWD's 2007 technology).
Discuss "unbreakable encryption" publicly and myriads of supposely competent people will furiously call you a "Charlatan".
Yet a book written 22 years ago by undisputed encryption experts explains how to write your own "unconditional encryption" ("unbreakable" in academic jargon, because assumptions-free). The Germans have a proverb for this kind of engineered dissonance: "Lies have short legs".
"Handbook of Applied Cryptography" (780 pages) by Alfred J. Menezes, Paul C. van Oorschot, and Scott A. Vanstone, prefaced by Ron Rivest (the 'R' of RSA, Inc.):
"The current volume is a major contribution to the field of cryptography. It is a rigorous encyclopedia of known techniques, with an emphasis on those that are both (believed to be) secure and practically useful. It presents in a coherent manner most of the important cryptographic tools one needs to implement secure cryptographic systems, and explains many of the cryptographic principles and protocols of existing systems."
This book can change the life of every tech user on the planet – and even prevent wars – I am not kidding. So, if you are interested in computer programming, cyber-security, encryption, consumer payments fraud, blockchains, the security of our common critical infrastructure, or merely about your own privacy, then keep reading (and share this document)!
The IoT (Internet of Things) and AI (Artificial Intelligence) will be cataclysmic failures if, because of the lack of any effective and durable security, we can remotely interfere with, infiltrate and sabotage devices and communications, everywhere, at any time:
"It's not the IoT devices themselves that will deliver the biggest breakthrough – it's the ability to connect them to securely exchange
information and deliver it to users."
Lockheed-Martin, "How The Internet of Things (IoT) Is Transforming Modern Warfare"
As a Defense contractor, Lockheed-Martin knows about security, but The Economist's point of view is even more revealing:
"There's a market failure in cyber-security, made worse by the trouble firms
have in getting reliable information about the threats they face."
The Economist, "Market failures - Not my problem"
"To avoid lurid headlines about car crashing, insulin overdoses and houses
burning, tech firms will surely have to embrace higher standards.
The Economist, "The Internet of things (to be hacked)"
How can the better-funded-than-anyone US Defense contractors, tech, and cyber-security firms have led to a "market failure"?
PQCrypto (an EU Academic/private "multi-million Euro research project" with MasterCard and Intel on the strategic advisory board) has issued initial recommendations presenting SPHINCS, their solution to the threat facing today's Public-Key Encryption (PKE), Symmetric-Key Encryption (SKE), and hashing standards:
PQCrypto explains that hash-based PKE (SPHINCS) is desirable because the number-theoretical security assumptions of other PKE schemes are less well-understood (each PKE family's key sizes are available here):
Finally, PQCrypto says its logo is a turtle because "Post-Quantum security is much more complicated and therefore much slower". The only sane way to contradict someone is to do better.
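To give an idea of what "hash-based" signatures look like, here is a toy Lamport one-time signature in Python – a distant ancestor of the SPHINCS construction (this sketch is ours, not PQCrypto's code). Its security rests only on the hash function, with no number-theoretical assumption:

```python
import hashlib, secrets

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits_of(msg: bytes):
    d = H(msg)
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(msg, sk):
    # reveal one secret per message-digest bit (hence "one-time")
    return [sk[i][bit] for i, bit in enumerate(bits_of(msg))]

def verify(msg, sig, pk):
    return all(H(s) == pk[i][bit]
               for i, (s, bit) in enumerate(zip(sig, bits_of(msg))))

sk, pk = keygen()
sig = sign(b"hello", sk)
assert verify(b"hello", sig, pk)
assert not verify(b"tampered", sig, pk)
```

Each key pair must sign exactly one message: a second signature reveals more secrets and lets a forger mix and match – the same key-management discipline the OTP demands, and one reason hash-based schemes like SPHINCS are "much more complicated and therefore much slower".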
Recently, I have been interviewed over the phone by a Google recruiter. As I qualified for the (unsolicited) interview but failed to pass the test, this blog post lists the questions and the expected answers. That might be handy if Google calls you one day.
For the sake of the discussion: I started coding 37 years ago (I was 11 years old) and never stopped since. Beyond having been appointed R&D Director 24 years ago (I was 24 years old), among (many) other works, I have since designed and implemented the most demanding parts of TWD's R&D projects – all of them delivering commercial products:
Google's representative stated that both management and up-to-date coding skills were required (a rare mix). But having exercised the former for more than 2 decades and the latter for almost 4 decades was not enough: I failed to give the "right answers". Is Google raising the bar too high or is their recruiting staff seriously lacking the skills they are supposed to rate?
Let's have a look!
In the past, G-WAN attracted web developers. Since last year, it has brought in CTOs who have exhausted all their options. Most of them have met increasingly interesting scalability problems – the kind that no off-the-shelf product can possibly address.
This CTO wanted to unify several different financial systems involving a total of more than 150 million messages per second.
Collected and sorted by category, priority, and type of audience, all these messages then had to be reliably dispatched in real-time to more than 1.5 million users, most of them paying subscribers – the kind notoriously eye-wateringly expensive to disappoint.
After a couple of prior failures involving gargantuan open-source projects designed-to-scale but not-to-perform, spiraling budgets, and long-overdue results, this CTO was ready to embrace anything that could "just do the job".
Preferably right now.
While talking about business trends with a circle of executives, the director of an R&D center in Asia lamented the lack of "true" competences. Intrigued by what his company could be missing, I invited him to explain what he meant.
He said that, far too often, pouring 20 or 200 people into a project made no difference in terms of innovation. While several of their R&D labs operate in different countries, all seem to produce similar results – like an invisible wall that cannot be broken, whatever the investment.
This reminded me of the years spent at large industry players like THALES in France or SPC Corp. in the US. Large companies often have a "postdoc" policy for recruiting R&D staff: university alumni tend to hire pals, people who look like them, or at least candidates with a common background.
Not willing to restart the ancestral engineers-vs-academics debate, I promised to send him a small test that would both illustrate the nature of the problem – and offer a quick way to help resolve it.
At the time, I did not anticipate the scale of the feedback – nor the impact that such a benign confidence could provoke.
The short answer is "time". If an innovation is really useful then everybody will eventually use it, one day. But well-known innovations may stay dormant for centuries – as long as people don't believe that doing such a thing is possible. Let's see how this happens, how different actors play in favor of (or against) disruptions, and why.
"An invasion of armies can be resisted, but not an idea whose time has come."
– Victor Hugo
Mail is a good example. We all know that it started with runners carrying a message from one point to another. Then horses and boats transported letters, and we got railways, the telegraph, faxes, and the Internet.
But few among us know how long innovations have to wait until they are endorsed by authorities with enough clout to lift them beyond the state of a mere curiosity. For example, in the 18th century, French king Louis XV replaced religious, academic, and other postal services with the Royal Post, a monopoly given to the "Black Cabinet" to spy on supposedly ever-conspiring people.
Electricity, seen in the sky and used in experiments by the Greek philosopher Thales (600 BC), had to wait until 1750 to find new insights with Benjamin Franklin – a largely self-taught researcher, one of the 'Founding Fathers of the United States of America', President of Pennsylvania, several times US minister, slave owner, and large-scale securities speculator (his face is on the 100-dollar bill).
Soon after, Alessandro Volta invented the battery in 1800, and Michael Faraday the electric motor in 1821. Things then accelerated with Nikola Tesla, Thomas Edison, and many others who contributed to the "Second Industrial Revolution", in which electricity shed the status of a near-magical, mysterious force.
Almost all web servers and database servers distributed today are "highly-scalable" – or so say the vendors, because that's a keyword that end-users, media, and search engines value as the promise of big savings.
And scalability indeed matters: it is the ability to perform well while concurrency grows. For a server, it means that satisfying one or 10,000 users must be done with low latency (the application's responsiveness, also called 'user experience').
Let's consider G-WAN and the ORACLE noSQL Database.
This demo will be presented in the noSQL and Big Data sessions of the ORACLE Open World (OOW), a 45,000-person event held in San Francisco on Sept. 30 – Oct. 4, 2012:
On the left, there is a photo of the session pitch taken by Alex.
The G-WAN presentation (the G-WAN-based PaaS) can be found here.
If you provide a successful service, then you have to make your application(s) scale. There are two dimensions which can then be involved for scaling: horizontally (adding more machines) and vertically (exploiting more powerful hardware – today, more cores – within each machine).
The second way of doing things, scaling vertically, was introduced on commodity hardware in the early 2000s (a decade ago). Despite multicore now being ubiquitous – from cellphones to data centers, network switches, TVs, planes, trains, boats, automobiles, laptops, and desktops – there is still a wide gap as far as software development tools are concerned.
The question is not whether you will have to use parallelism (adding a dimension to the equation grants access to exponential gains); the question is how to do it to reduce costs. After all, you have paid for this hardware, so not using its capabilities would be quite a pity.
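As a purely illustrative sketch of vertical scaling (ours, not a benchmark), the pattern is always the same: split the work into one chunk per core, process the chunks concurrently, and merge the partial results.

```python
from concurrent.futures import ProcessPoolExecutor
import os

def chunk_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=None):
    # Split [0, n) into one chunk per core, sum the chunks in parallel,
    # then merge the partial sums (a map/reduce in miniature).
    workers = workers or os.cpu_count() or 1
    step = max(1, n // workers)
    bounds = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, bounds))

if __name__ == "__main__":
    assert parallel_sum(1_000_000) == sum(range(1_000_000))
```

The decomposition is embarrassingly parallel here; real workloads add synchronization and memory-contention costs – which is precisely where development tools still lag.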
The total lack of uncertainty.
Very few things are considered "secure" because the sun could blow us all away without notice, because the ground can melt in a volcanic eruption, and because each of us can die from a heart attack – at any moment.
So what makes something "secure" must be independent from the known and unknown laws of the universe – and, more generally, from anything that we cannot control, like exterior conditions.
And this is not as difficult as it sounds.
Here, a Salt Lake City media outlet (Salem-News has 98 writers in 22 countries) shares views about the costs associated with the distribution of information.
The U.S. state of Oregon, located on the Pacific Northwest coast, hosts large datacenters (Google, Amazon, Facebook, etc.) to take advantage of cheap power (hydroelectric dams) and a climate conducive to reducing cooling costs.
Having received an invitation to provide insights about how software can contribute to making this industry sustainable in the long term despite the explosion of Internet clients, we have tried to extract from our experience a point of view rarely mentioned in purely factual studies.
While finance and equipment surely help, we explain why we believe that the human factor can play a decisive role in this picture.