Dr John Yardley, Managing Director of Threads Software Ltd, outlines why the fields of computing and mathematics are so closely linked
Whether we admit it or not, our brains are computers. When we humans solve mathematical problems, we do it algorithmically – as a sequence of steps that follow logically from one another – though we may not generally be aware of what those steps are.
Algorithms are fundamental to any sort of problem-solving, not just mathematical problems. They form the basis of computing, and it is important to understand what is computable and what is not. Learning how to program a computer is an extremely useful skill that can be applied to many areas of work and life.
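To make the idea concrete, here is one of the oldest algorithms of all – Euclid's method for the greatest common divisor – written as the short sequence of steps a computer would follow (the example is illustrative, not from the article itself):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero; the last
    non-zero value is the greatest common divisor."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

Each pass through the loop is one logical step, and the whole procedure is exactly the kind of step-by-step recipe our brains follow without our noticing.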
Information theory is a very important aspect of computer science. It tells us, for example, how much of our data carries real information and how far we can compress it without losing any. Information theory was developed using many advanced mathematical concepts. Modern computers rely on the whole spectrum of physical phenomena, including quantum effects. Without mathematics, we would be unable to analyse or use these phenomena.
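One of information theory's central results can be shown in a few lines: Shannon entropy puts a hard lower bound on how many bits per symbol any lossless compressor can achieve. A minimal sketch (my own illustration, not from the article):

```python
import math
from collections import Counter

def entropy_bits_per_symbol(text: str) -> float:
    """Shannon entropy of a message: the average number of bits
    per symbol that any lossless code needs, given how often
    each symbol actually occurs."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

message = "aaaaaaab"  # highly repetitive, so highly compressible
print(entropy_bits_per_symbol(message))  # about 0.54 bits, versus 8 bits per ASCII character
```

A repetitive message has low entropy and compresses well; a message of evenly mixed symbols has high entropy and barely compresses at all.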
As computer resources get cheaper, smaller and faster, neural networks will solve a raft of difficult problems. However, although this may give us a cheaper and quicker solution, we may lose sight of the real problem. Sometimes, we end up with a complex solution when a simple one is under our noses.
By deconstructing a problem, you eventually get to a point where there is something that most people can understand. Many people are not trained to break the task down into components. By assuming that things are too complicated for the average person, we miss out on a massive amount of problem-solving potential.
For example, understanding human speech is a hugely complex task for a computer. To decide what can be done using technology, we need to understand how human speech works. This requires study of acoustics, linguistics, grammar and other complex disciplines – but each less daunting than the overall problem. Eventually, we may discover that if we can detect fricative sounds (“shh”, “fff”), say, we get clues to some of the words spoken. Making a piece of electronics to detect fricatives is not complex. It solves the problem, and also gives us valuable insights into how human speech is constructed.
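The fricative-detector idea above really is simple once the problem is broken down. Fricatives ("shh", "fff") are noise-like and change sign very rapidly, whereas vowels are low-frequency and periodic, so even a crude measure such as the zero-crossing rate separates them. A toy sketch on synthetic signals (assumed sample rate and signal shapes are my own, purely for illustration):

```python
import math
import random

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs whose signs differ.
    Noise-like fricatives cross zero far more often than the
    low-frequency, periodic waveform of a vowel."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return crossings / (len(samples) - 1)

random.seed(0)
# 0.1 s of signal at an assumed 8 kHz sample rate
vowel_like = [math.sin(2 * math.pi * 120 * t / 8000) for t in range(800)]  # 120 Hz tone
fricative_like = [random.uniform(-1, 1) for _ in range(800)]               # broadband noise

print(zero_crossing_rate(vowel_like) < zero_crossing_rate(fricative_like))  # True
```

This is nowhere near a production speech-recognition component, but it shows how one small, understandable piece of the decomposed problem can be solved with a few lines of logic.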
Copying how the human brain works
Today, neural networks strive to copy how the human brain works, by “learning” how humans speak. Given enough samples, they can recognise speech as well as the devices designed using all our knowledge of speech.
This software knows nothing of linguistics or acoustics. It is akin to a child learning to speak. Although it may have inferred that the word “six” is made up of two fricatives, a short vowel sound and a glottal stop, the computer won’t be able to tell us. This is fine until, inevitably, the computer tries to recognise a word that a human would but that was not in the training set. If the computer were taught the rules of speech rather than having to learn them from scratch from samples, it would probably make fewer mistakes. The best algorithms use a combination of self-learning and being taught. To teach the computers, we need to understand what is going on first. And it’s not that complex if you break it down enough.
So how have we got to where we are?
The earliest computers, huge, expensive and slow, performed one task at a time, but with the advent of operating systems, a computer could perform several tasks simultaneously – a great leap forward.
Computers talk to one another
The next major development was to allow computers to talk to one another, allowing information exchange so that individual computers could specialise in certain tasks such as distributing email and files. This is the client-server model. By this time, the cost of computers had fallen, so it was feasible for a small company to own several and join them together in a network.
We already had telephone networks and power distribution networks. The innovation was the data network.
Because the first data networks required dedicated connections between computers, the networks tended to be contained within a single organisation or building in Local Area Networks (LANs). However, engineers soon found a way of transmitting data over existing telephone networks, so next came Wide Area Networks (WANs) which connected geographically dispersed local area networks – the preserve of large corporates, government and academia.
That all changed with the Internet, born originally out of military work done in the UK and U.S. The Internet is just a set of rules by which computers can communicate via Internet service providers (ISPs) – who, like telephone exchanges, route data rather than (analogue) speech. Sometimes, they utilised existing telephone networks by converting data into speech-like signals through a modem.
Then came the World Wide Web (WWW). Like the Internet, it is a set of rules that allow computers to communicate and display information to a human user. A program called a web browser can display that information on any computer – Mac or PC – and lets the user interact with the web application.
Later, as telephone networks were upgraded, they became both fast and reliable, and together these advances met the last conditions necessary to give rise to The Cloud. To put this in context, early Internet connections (in the 1980s) ran at 100 bits per second. Nowadays, 50,000,000 bits per second is not uncommon for even domestic users.
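It is worth doing the arithmetic on those two figures to see just how large the leap was. A quick back-of-the-envelope calculation (ignoring protocol overhead, and using a hypothetical one-megabyte file purely for illustration):

```python
def transfer_seconds(size_bytes: int, bits_per_second: int) -> float:
    """Time to move a file at a given line rate, at 8 bits per byte
    and ignoring protocol overhead."""
    return size_bytes * 8 / bits_per_second

one_megabyte = 1_000_000
print(transfer_seconds(one_megabyte, 100))         # 80000.0 seconds - almost a day at 1980s speeds
print(transfer_seconds(one_megabyte, 50_000_000))  # 0.16 seconds today
```

A five-hundred-thousand-fold speed-up is what turned remote applications from a curiosity into The Cloud.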
Applications that previously had to run locally could now run from remote sites anywhere.
It is very important that students of mathematics and computer science understand the basics, as technology has a way of reinventing itself. We should not stop learning long division just because we have calculators. Although we may never use certain methods, knowledge of how they evolved often helps us solve analogous problems.