Since the dawn of computer programming, software developers have been aware that the complexity of code grows rapidly with its size.
My most recent paper submission (preprint available) is about improving the verifiability of computer-aided research, and contains many references to the related subject of reproducibility. A reviewer asked the same question about all these references: isn't this the same as for experiments done with lab equipment? Is software worse? I think the answers are of general interest, so here they are.
A recent article in "The Atlantic" has been the subject of many comments in my Twittersphere. It's about scientific communication in the age of computer-aided research, which requires communicating computations (i.e. code, data, and results) in addition to the traditional narrative of a paper.
It is widely recognized by now that software is an important ingredient of modern scientific research. If we want to check that published results are valid, and if we want to build on our colleagues' published work, we must have access to the software and data that were used in the computations.
Data science is usually considered a very recent invention, made possible by electronic computing and communication technologies. Some consider it the fourth paradigm of science, suggesting that it came after three other paradigms, though the whole idea of distinct paradigms remains controversial. What I want to point out in this post is that the principles of data science are much older than most of today's practitioners imagine.
The plea for stability in the SciPy ecosystem that I posted last week on this blog has generated a lot of feedback, both as comments and in a lengthy Twitter thread. For the benefit of people discovering it late, here is a summary of the main arguments and my reply to them.
Two NumPy-related news items appeared on my Twitter feed yesterday, just a few days after I had myself accidentally started a somewhat heated debate about the poor reproducibility of Python-based computer-aided research.
It's hard to find an aspect of modern life that is not influenced in some way by software. Some of it is very visible, for example the Web browser I start on my computer. Other software is completely invisible, such as the software controlling my car's diesel engine. Some software is safety-critical, for example flight control software in airplanes. Other software serves much more frivolous purposes, such as playing games.
A few days ago, a tweet in my timeline pointed me to an article that sounded like a good read for the weekend, which it was. The main argument the author makes is that C remains unsurpassed as a system integration language, because it permits interfacing with "alien" code, i.e. code written independently and perhaps even in different languages, down to assembly.
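To illustrate what "interfacing with alien code" means in practice, here is a minimal sketch (not taken from the article): C only needs a prototype and the platform's calling convention to use a routine compiled elsewhere, whatever language it was written in. The routine name `alien_dot_product` is hypothetical.

```c
/* Sketch: calling "alien" code through the C ABI.
 * The routine declared below could be implemented in Fortran, Rust,
 * or hand-written assembly; C needs only its prototype and the
 * platform's calling convention. The name is hypothetical. */

#include <stdio.h>

/* Routine compiled separately, possibly in another language,
 * and supplied at link time. */
extern double alien_dot_product(const double *a, const double *b, int n);

int main(void) {
    double x[3] = {1.0, 2.0, 3.0};
    double y[3] = {4.0, 5.0, 6.0};
    /* C neither knows nor cares how the routine was implemented,
     * as long as it respects the calling convention. */
    printf("dot = %f\n", alien_dot_product(x, y, 3));
    return 0;
}
```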
Over the last few years, I have repeated a little experiment: have two scientists, or two teams of scientists, write code for the same task, described in plain English as it would appear in a paper, and then compare the results produced by the two programs. Each person or team was asked to do as much verification and testing as possible before comparing their results with the other's work.
A few years ago, I decided to adopt the practices of reproducible research as far as possible within the technical and social constraints I have to live with. So how reproducible is my published code over time? The example I have chosen for this reproducibility study is a 2013 paper about computing diffusion coefficients from molecular simulations. All code and data have been published as an ActivePaper on figshare.