If the Machine Is Using English, Won’t It Be Too Slow?

It depends what it is doing. A person working with one document on one machine should be fine. But it is not as simple as that – the document may reference other documents, which may use different definitions for some of their words, and this has to be handled whenever a connection is made. In other words, a document linking to other documents will need to wake up and communicate with another machine holding the other document, the connection taking care of any differences in meaning. This is also how we see inputs from different specialties being integrated, where the vocabulary of one specialty is foreign to the other, as with economists and epidemiologists.
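The reconciliation step on connection can be sketched in miniature. This is a hypothetical illustration, not a proposed implementation: the `Document` class, the glossaries, and the sample definitions are all invented here to show how a connection might detect words the two sides use differently before any exchange of meaning takes place.

```python
# Hypothetical sketch: each document carries a glossary of the definitions
# it uses; on connection, words whose meanings differ are flagged so the
# link can translate or annotate them.

class Document:
    def __init__(self, name, glossary):
        self.name = name
        self.glossary = glossary  # word -> definition as used in this document

def connect(local, remote):
    """Return the shared words whose definitions differ between the two
    documents; the connection must handle these before communicating."""
    shared = local.glossary.keys() & remote.glossary.keys()
    return {w: (local.glossary[w], remote.glossary[w])
            for w in shared
            if local.glossary[w] != remote.glossary[w]}

econ = Document("economics", {"depression": "a prolonged economic downturn"})
epi = Document("epidemiology", {"depression": "a mood disorder"})
conflicts = connect(econ, epi)
# "depression" is flagged: it means different things to the two specialties
```

The point of the sketch is only that the difference in meaning is caught at connection time, by the link itself, rather than left to surprise either machine mid-conversation.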

Where the intention is to develop a strategy in English for detecting fraud or money laundering, the strategy would be developed on one machine, and then a fleet of machines deployed using the same instructions to look for instances in a large database of transactions. As an aside, Google “reads” every new document that appears on the Internet, a process that is very slow, so Google deploys millions of computers on the task. I am not suggesting millions, but a thousand computers come fairly cheap, and such a fleet can easily switch from one task requiring subtlety and self-awareness of what it is doing to another.
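The develop-once, deploy-to-a-fleet pattern can be sketched as follows. Everything here is a stand-in: the `strategy` predicate substitutes a trivial rule for instructions developed in English, the thread pool substitutes for a fleet of machines, and the transaction records are invented. The shape is what matters: one set of instructions, many workers, each scanning its own shard of the database.

```python
# Sketch only: a "strategy" authored once and applied identically by every
# worker in the fleet, each over its own shard of the transactions.
from concurrent.futures import ThreadPoolExecutor

def strategy(txn):
    # Placeholder rule standing in for instructions developed in English:
    # flag large, suspiciously round transfer amounts.
    return txn["amount"] >= 10_000 and txn["amount"] % 1000 == 0

def scan_shard(shard):
    # Each "machine" runs the same instructions over its portion of the data.
    return [t for t in shard if strategy(t)]

transactions = [{"amount": a} for a in (50, 10_000, 999, 25_000, 10_500)]
shards = [transactions[i::4] for i in range(4)]  # split across 4 workers

with ThreadPoolExecutor(max_workers=4) as pool:
    flagged = [t for result in pool.map(scan_shard, shards) for t in result]
# flagged now holds the 10,000 and 25,000 transactions
```

Swapping the trivial predicate for something subtler changes nothing about the deployment: the fleet simply receives new instructions.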
Where the machine is tasked with integrating many different and complex models, climate change being an example, this would be done with a fleet of machines, each managing an area – say meteorology, oceanography, agriculture, emissions, resilience – and communicating with an integration model, which speaks to each of them in its specific technical jargon and rolls up the result into a more easily understood description.
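The hub-and-spoke arrangement described above can be sketched in a few lines. The specialty reports and the jargon-to-plain-language translations below are all invented for illustration; the point is the structure, with one machine per area and an integrator that understands each area's vocabulary and produces the rolled-up, readable summary.

```python
# Illustrative sketch (all reports and translations invented): one machine
# per specialty reports in its own jargon; the integration model translates
# and rolls the results into plainer language.

SPECIALTY_REPORTS = {
    "meteorology": "SST anomaly +1.2 K",
    "oceanography": "thermohaline circulation weakening",
    "agriculture": "yield projection -8%",
}

PLAIN_LANGUAGE = {  # the integrator's translations of each specialty's jargon
    "SST anomaly +1.2 K": "sea surface 1.2 degrees warmer than usual",
    "thermohaline circulation weakening": "deep ocean currents slowing",
    "yield projection -8%": "crop harvests expected to fall 8 percent",
}

def integrate(reports):
    # The integration model restates each area's finding in plain terms.
    lines = [f"{area}: {PLAIN_LANGUAGE[msg]}" for area, msg in reports.items()]
    return "Integrated picture:\n" + "\n".join(lines)

summary = integrate(SPECIALTY_REPORTS)
```

In practice the translation step would itself require the integration model to genuinely understand each specialty's language, which is the hard part the essay is pointing at; the dictionary here only marks where that understanding sits in the architecture.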