Thursday, October 15, 2020

Man vs. Machine

Decades ago, in the 1960s, the Oregon Research Institute decided to create a simple algorithm – one that judges the likelihood of an ulcer being malignant by considering just seven equally weighted factors. To build it, the researchers consulted doctors and asked them to judge the probability of cancer in 96 different cases of stomach ulcers, mixing up the X-ray slides and sometimes showing them the same ulcer twice. This was the input on which the algorithm was based: expert judgement over a fairly small data set (by today's standards), cleaned for human error. The model was supposed to be a starting point. Nothing groundbreaking. The results, however, shocked everyone.

The doctors, whose inputs were used to build the algorithm, had often contradicted themselves when looking at the same X-ray. Even though this sounds absurd, human biases and memory shortcomings work in strange ways. The model, on the other hand, beat even the single best doctor. Remember that the model incorporated just seven equally weighted factors. A real-life doctor might consider many more, and assign unequal weights depending on the circumstance. Despite such constraints, a back-of-the-envelope algorithm was good enough to outdo expert judgement.
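To make the simplicity concrete, here is a minimal sketch of what such an equally weighted scoring rule might look like. The factor names and the 0-to-1 rating scale below are hypothetical placeholders; the post does not list the actual cues the Oregon researchers used.

```python
# A minimal sketch of an equally weighted scoring rule, in the spirit of the
# Oregon Research Institute model. The seven factor names are hypothetical.

FACTORS = [
    "ulcer_size",          # each factor is a rating on a common scale,
    "crater_depth",        # e.g. 0 (benign-looking) to 1 (malignant-looking)
    "border_regularity",
    "location",
    "contour",
    "filling_defect",
    "surrounding_tissue",
]

def malignancy_score(ratings: dict[str, float]) -> float:
    """Average the seven ratings, giving each factor an equal weight of 1/7."""
    return sum(ratings[f] for f in FACTORS) / len(FACTORS)

# Example: one doctor's ratings for a single X-ray slide (illustrative values).
slide = {
    "ulcer_size": 0.8,
    "crater_depth": 0.6,
    "border_regularity": 0.9,
    "location": 0.2,
    "contour": 0.7,
    "filling_defect": 0.5,
    "surrounding_tissue": 0.4,
}

print(f"Estimated likelihood of malignancy: {malignancy_score(slide):.2f}")
```

The point of the sketch is that there is nothing sophisticated here: no unequal weights, no interactions between factors, just a consistent average. Consistency, not cleverness, is what let it outdo the experts.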

Like us, machines use historical data, or inductive reasoning, to arrive at a judgement in such cases. However, unlike us, they do not suffer from lapses of memory or psychological biases. In some cases, they also have access to much more data. Even the celebrated Nobel Prize-winning chemist Linus Pauling committed a basic blunder when he tried to work out the structure of DNA. If he had access to a computer, with the data and constraints known to Pauling plugged in, the computer would have raised a red flag on what he proposed: a triple helix in which the phosphates were held together by hydrogen bonds. Ironically, Watson and Crick confirmed the error by referring to Pauling's own classic textbook, "College Chemistry". Importantly, it is not that a computer with sufficient AI would have helped Pauling solve the DNA problem; he did not have access to the X-ray crystallography data that Watson, Crick, Wilkins, and Franklin did, which was so crucial to cracking the puzzle. However, a computer would have prevented him from making a blunder, and sometimes we are only as good as our biggest blunders.

With the kind of computing power and big data we have now, the algorithms will only keep getting better. The more data points computers observe, the smarter they become. Not surprisingly, in 2017, Stanford researchers developed an algorithm that can diagnose pneumonia better than expert radiologists. Notably, AI has made these impressive strides in medical research, a field where doctors spend years building expertise. It is easy to imagine how it will impact other, less complex domains.

Friday, October 02, 2020

Science fiction authors need to rethink their plots

The 1940s and 1950s are widely considered the golden age of science fiction. Some extend it to the 1960s as well, when man's landing on the moon further fuelled popular imagination. The belief that space travel to the depths of the Universe was within reach led to more space stories, with people toying with the possibility of inhabiting other worlds and crowded space stations. When Arthur Clarke penned 2001: A Space Odyssey in 1968, he foresaw man travelling to Jupiter by the beginning of the next millennium.

The authors of that age also talked about flying cars, transportation belts, and affordable video communication. While the last did become a reality, the other two will perhaps never see the light of day. The way science fiction has changed over the years led one writer to comment that while the science fiction of the golden era had a firm faith in scientific progress for the betterment of humankind, contemporary literature is largely pessimistic and talks about a dystopian future. The themes now are of worlds submerged in rising water levels, humans taking refuge from climate change in subterranean caverns, or artificial intelligence gone rogue. Even when the books are about alien life, it is mostly hostile and destructive. The COVID-19 outbreak will perhaps revive the genre of lab-designed viruses as well. A valid argument put forward by authors is that the purpose of science fiction is to imagine possible scenarios far in the future, and to prevent the nightmarish ones from playing out. However, just as hope is a dangerous thing, so is despair.

Some ideas of science fiction, like flying cars and power beams, were preposterous from the very start, and abandoning them is no great setback. The energy needed to keep a car in the air, the difficulty of managing the traffic, and the unfavourable cost-benefit made the idea far-fetched to begin with. Perhaps the same could be said of the moving transportation belts that Asimov was so fond of. These ideas needed more cheap, sustainable energy than we could ever throw at them. We are moving in that direction, but even technology like hydrogen fuel cells is not going to make those ideas a reality.

Perhaps somewhere down the line we abandoned hope of space travel as well. As space programs slowed down after the Cold War came to an end, enthusiasm about space travel receded. Now even movies are rarely based on intergalactic travel using spaceships like the Enterprise, despite the special effects being available. The new buzzword is Artificial Intelligence, but it brings up images of rogue robots, an evil Skynet, and mass unemployment. Asimov, who came up with the Three Laws of Robotics, had a kinder view of things to come. While even he talked about the hostility that humans are likely to have towards robots, no matter how benign they are, the overall impact in his stories was favourable. He envisaged a world where humans and robots would co-exist and work as a team, even solving crimes together as detectives.

All said and done, science fiction authors need to take a more cheerful outlook. I find the space travel novels charming, and if you ignore the low production value, Star Trek, made in 1966, is quite creative. One of my favourite authors, Jules Verne, almost always treats science as an ally, not as a force that will lead to destruction unless we are extremely cautious. His books talk about curious scientists who want to harness the power of science for the greater good. HG Wells is an exception; despite being on the other end of the spectrum, he is still a joy to read. But I digress.

Authors might have only a small role to play, but they can help shape the science of the future. Dystopian novels spark fear in the populace, making it difficult to secure political funding for projects. People do not want to invite hostile aliens, or encourage the development of autocratic robots, or tools that enable surveillance. Genetic alteration is almost always associated with the creation of monsters or deadly pathogens. Even particle accelerators are supposed to open up dimensions from which hideous beings will enter our world. To put it crudely, this is plain fear-mongering. Scientists can spend their time developing the next version of smartphones or quantum computing, but, pardon the word usage, this will not lead to quantum leaps in progress. We need to encourage them to take braver bets. Most science fiction literature, in its current form, is a roadblock to that.