To people asking, "why do they use Excel?" that's like asking "why must we be subjected to gravity?"
The whole world's data ultimately comes from or ends up in an Excel spreadsheet. Sure, we might use other intermediate data storage methods, but in the end it's going to go into some scientist's or some politician's computer, and by golly it's gonna be in Excel. Trying to rally against Excel is like trying to rally against fundamental forces of nature.
This is just an example of that fundamental force winning.
I don't disagree completely with this, but just want to point out that it's kind of a bad smell to have computational biologists who are - as someone in the article puts it - computationally illiterate. I have met lots of these types over the years, and usually their methods are kind of a gong show. If you can't properly sanitize your data inputs on your column headers, why should I trust that you've treated the rest of your data properly?
I have a strong feeling that, if people really put an effort into reading and replicating more papers, we would find that a lot of what's being published is simply meaningless.
In grad school I had a subletting roommate for a while who was writing code to match some experimental data with a model. He showed me his model. It was quite literally making random combinations of various trigonometric functions, absolute value, logarithms, polynomials, exponents, etc. into equations that were like a whole page long and just wiggling them around. He was convinced that he was on a path to a revolution in understanding the functional form of his (biological) data, and I believe his research PI was onboard.
I guess "overfitted" never made it into the curriculum.
> It was quite literally making random combinations of various trigonometric functions, absolute value, logarithms, polynomials, exponents, etc. into equations that were like a whole page long and just wiggling them around.
Technically, we call that a "neural network". Or "AI".
I work in computational materials science (where ML brings funding) and a funny paper of this kind is here: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.11... - they are literally trying out 100000s of possible combinations by brute force, to build a "physical model".
Then they go to conferences and brag about it, because they have to (otr they know it's bs).
Datasets are so-so (have a look at QM9...) and for more specialized things, people generally don't bother trying to benchmark or compare their results against a common reference. It's just something new...
And on top of all that: even setting aside fancy statistical methods applied without much understanding of them, your theoretical computations might not make much sense (at least not in the sheer volume being pumped out and published)...
Oh, I thought "on the real" fit the context better, meaning they knew in their heart of hearts it was bullshit, but "off the record" is about the same.
> I have a strong feeling that, if people really put an effort into reading and replicating more papers, we would find that a lot of what's being published is simply meaningless.
People have figured that out long ago [1] (I know the author of that paper lately turned somewhat controversial, but that doesn't change his findings). It's not very widely known in the general public. But if you understand some basic issues like p-hacking and publication bias and combine that with the knowledge that most scientific fields don't do anything about these issues, there can hardly be any doubt that a lot of research is rubbish.
Yeah, but one would hope that science has a higher standard. 80% garbage results in science sounds catastrophic to our understanding of the world, and in particular when it comes to making policies based on that science.
There's the saying "science advances one funeral at a time."
'A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.' This principle was famously laid out by German theoretical physicist Max Planck in 1950, and it turns out that he was right, according to a new study.
Also the story of Ignaz Semmelweis who discovered that if doctors washed their hands it reduced deaths during childbirth - but for a variety of reasons his findings were resisted.
In grad school I had a friend who was doing olfactory (smell) research on rats with tetrode drives (wires in their brain). He was looking at the neuronal response to smells that they gave the rats and had a few signals to match up. There was the signal from the Arduino running the scent gates, the amps that reported the weak neuronal currents, the nose lasers that gated the Arduino, etc. He was having a hard time getting through all the data in his MATLAB code and I offered to help for some beer.
After the 11th nested 'if' statement, I upped the request to a case of beer. I'm not certain he ever got the code working.
To the larger point, scientists are not programmers. They got into their programs to do research. What keeps them going is not the joy of programming, but the thrill of discovery. Programming is nothing but a means to an end. One they will do the bare minimum to get working. Asking hyper stressed out grad students to also become expert coders isn't reasonable.
And yes, that means that the code is suspect at best. If you load the code onto another computer, make sure you can defenestrate that computer with ease; do not use your home device.
I keep seeing this sentiment when it comes to those in the natural sciences, but it makes no sense.
I could replace "programming" in your above little bit with "mathematics" and it would be just as weird.
Our modern world runs on computers and programs, just as our modern world and modern science built itself on mathematics and required many to use it. So too the new world of science may require everyone to know how to program, just as they know about the chemical composition of smells, or the particulars of differential equations, etc.
And I know your argument isn't "they shouldn't learn programming", but honestly since I keep seeing this same line of reasoning, I can't help but feel that is ultimately the real reasoning being espoused.
Science is getting harder, and its requirements to competently "find the exciting things" raises the bar each time. I don't see this as a bad thing. To the contrary, it means we are getting to more and more interesting and in-depth discoveries that require more than one discipline and specialty, which ultimately means more cross-functional science that has larger and deeper impacts.
Again: these are tools that are means to an end. They only need to work well enough to get the researcher to that end.
A lot of what are considered essential practices by expert programmers are conventions centered around long-term productivity in programming. You can get a right answer out of a computer without following those conventions. Lots of people did back in the day before these conventions were created.
That's not to say that everybody with horrible code is getting the right answers out of it. I'm sure many people are screwing up! My point is just that ugly code does not automatically produce wrong answers just because it is ugly.
By analogy, I'm sure any carpenter would be horrified at how I built my kayak rack. But it's been holding up kayaks for 10 years and really, that's all it needs to do.
I will add that in general, statistical analysis of data is not by itself adequate for scientific theory--no matter how sophisticated the software is. You need explanatory causal mechanisms as well, which are discovered by humans through experimentation and analysis.
And you can do science very well with just the latter. Every grand scientific theory we have available to us today was created without good programming ability, or really the use of computers at all. Many were created using minimal math, for example evolution by natural selection, or plate tectonics. Even in physics, Einstein came up with relativity first, and only then went and learned the math to describe it.
Your point is maybe a little obtuse to me, because it sounds like you are arguing for "computers are tools that should be learned, but really no one does and who can blame them, they just want to science" and simultaneously arguing, "tools aren't science, and science can be done without them".
I feel like the latter is obvious: of course the tools aren't science, but if you want to do real work and real science, your tools are going to be crucial for establishing measurements, repeatability, and sharing how one models their hypothesis onto real world mechanics.
Likewise, the former is just the same commonly repeated thing I just argued against and my reply is the same: so what? You building a kayak is not science and is irrelevant.
Scientists can't reach a meaningful conclusion without proper use of tools. All they can do is hypothesize, which is certainly a portion of science (and many fields are in fact stuck in this exact stage, unable to get further and come to grounded conclusions), but it is not the end-all of science, and getting to the end in modern-day science means knowing how to program.
Of course there are exceptions and limitations and "good enough". No one is arguing that. The argument I am refuting is those who think "tools are just tools, who cares, I just want my science". That is the poor attitude that makes no sense to me.
> Scientists can't reach a meaningful conclusion without proper use of tools.
I'm just trying to make the point that "proper" is subjective. Software developers evaluate the quality of code according to how well it adheres to well-established coding practices, but those practices were established to address long-term issues like maintainability and security, not whether the software produces the right answer.
You can get the right answer out of software even if the code is ugly and hacky, and for a lot of scientific research, the answer is all that matters.
The usual reason programmers object to ugly, hacky code is that it's a lot harder to be justifiably confident that such code actually does produce the right answer -- "garbage in, garbage out" is just as true in function position as it is in argument position.
Tbh I think it's a case for multidisciplinary research. You wouldn't only hire one skill set to run a company, even a tech one, so why should research be any different? That's probably where the deep insights are.
People who are just decent programmers can make at least twice (probably 3 or 4 times) as much money working in industry as in science in an academic environment. Most programmers who would work for less money because they are interested in science will be more interested in computer programming problems than basic programming to support a scientist. NSF won't give you $250k to hire a really good programmer to support your gene analysis project. More like 100k if you are lucky.
So what you end up with is that the great scientists who are also decent programmers are the ones who can do the cutting-edge science at the moment.
Think of the flip side: Programmers are terrible biologists.
Sure, it would be great if we all had more time to learn how to code. Coding is important. But I'd say the onus should be on coders to build better tools and documentation so they are empowering people to do something other than code, rather than reduce everything to a coding exercise because making everything look like code means less boring documentation and UX work for coders.
I mean, biology is in fact a full on degree program and you pretty much need a PhD before you're defining an original research topic. It's not because biologists are dumber and learn slower. It's that biology is complicated and poorly understood, and it takes years to learn.
Contrast this to coding... you don't even need to go to college to launch a successful software product, and the average person can became proficient after a few years of dedicated study. However, this is a few years that biologists don't have, as their PhDs are already some of the longest time-wise to finish.
The decision to rename genes is totally consistent with the biologists' MO: if a cell won't grow in a given set of conditions, change the conditions. Sure, we can CRISPR-edit the genes to modify a cell to grow in a given set of conditions, but it's usually far easier to just change the temperature or growth media than to edit a cell's DNA.
My take away is that this is more a failure of programmers and/or a failure of their managers to guide the programmers to make tools for biologists, than of biologists to learn programming. Sure, coders get paid more, but they aren't going to cure cancer or make a vaccine for covid-19 without a biologist somewhere in the equation. And I'm glad the biologists developing vaccines today are doing biology, and not held up in their degree programs learning how to code!
MATLAB has taken over bio specifically because it has great documentation and examples. If Python is pseudo-code that compiles, then MATLAB is just speaking English. Even still, the spaghetti that researchers get into is just insane.
> To the larger point, scientists are not programmers. They got into their programs to do research.
I would say most research, to an ever growing degree, is so heavily dependent on software that it's tough to make that claim anymore. It makes no sense to me. It's like saying Zillow doesn't need software engineers because they are in the Real Estate business, not the software business.
I maybe misspoke. I meant that scientists do not go into science to program, they go into it to discover and do research (among many many other things). Sure, some do find joy in good programming, but that's not why they are there to begin with. Becoming a better programmer isn't their passion, and those skills remain underdeveloped as a result.
> To the larger point, scientists are not programmers.
I mean, sort of. Some research is essentially just programming; other research can get by with nothing but excel. Regardless, it's unreasonable to ask most scientists to be expert programmers -- most aren't building libraries that need to be maintained for years. If they do code, they're usually just writing one-shot programs to solve a single problem, and nobody else is likely to look at that code anyway.
There are lots of great computational biologists, but being a computational biologist doesn't necessitate being good with computers. Plenty of PI's rely pretty much exclusively on grad students and post-docs to run all their analyses.
Not that I'm saying using excel is bad either. I use excel plenty to look at data. But scientists need to know how to use the tools that they have.
If people are just looking at the spreadsheets then wouldn’t the cells interpreted as dates not be a problem? It seems like it would only be a problem if you’re doing computation on the cells.
It's also my experience of research in biological sciences that it is a widespread belief/fact that in order to get published in a top journal, the analysis methods must be "fancy", for example involving sophisticated statistical techniques. I worked on computational statistical methods so I'm not against that per se, but the problem is that if you have the training to contribute at the research front of an area of biology you rarely have the training to understand the statistics. Some would say that the collaborative publication model is the solution to that, but in many cases the end result isn't what one would hope for. I do think that less emphasis on "fancy" statistics, and more emphasis on simple data visualizations and common sense analyses would be a good thing.
I'm an ex-computational biologist who did most of his work in Python but periodically had to interop with Excel.
The basic assumption I have is that when I input data into a system, it will not translate things, especially according to ad-hoc rules from another domain, unless I explicitly ask it to do so.
It's not clear what data input sanitization would mean in this case; date support like this in Excel is deeply embedded in the product and nobody reads the documentation of Excel to learn how it works.
It would be nice if everyone was an expert at everything, but they can't be. It would be nice if they hired experts, but money doesn't grow on trees. We often insist on a degree of excellence we refuse to pay for.
It's not about being an expert at everything or hiring more people. These aren't particularly hard problems; it's not difficult to find biologists who are incredibly adept at using Python, R or C. It's about thinking about how science gets funded and how it gets implemented. I've written here before about the difference between "grant work" and "grunt work", and how too much computer-touching tends to get looked down upon at a certain level.
If you're deciding who gets a large-scale computational biology grant, and you're choosing between a senior researcher with 5000 publications and a broad scope, and a more junior researcher with 500 publications and a more computationally focused scope, most committees choose the senior researcher. However, the senior researcher might not know anything about computers, or they may have been trained in the 70's or 80's when the problems of computing were fundamentally different.
So you get someone leading a multi-million dollar project who fundamentally knows nothing about the methods of that project. They don't know how to scope things, how to get past roadblocks, who to hire, etc.
What's your source on it not being difficult to find biologists who are adept at using python, R, or C? Most biologists operating in private industry or academia have many years of training in their fields and many have learned their computational tools as they've gone on, meaning they've never received proper training. It seems dubious to claim that there's this neverending source of well trained biologists who are also adept at programming.
I would say the number of biologists who actually understand programming is extremely small. I've been programming for fun for ~15 years, and I'm about to finish a PhD in chemical biology (e.g. I started programming in C far before I started learning biology).
You might occasionally run into someone who is passable - at best - with R or Python. But most of the code they might write is going to be extremely linear, and I doubt they understand software architecture or control flow at all.
I don't know any biologists who program for fun like me (currently writing a compiler in Rust).
To be fair, linear code is often totally sufficient for most types of data analysis. Biologists don't really need to understand design patterns or polymorphism, they just need to not make computational mistakes when transforming the data.
Absolutely. My point was more that you can't expect comp. biologists to actually be "good" programmers when compared to SWEs or even web devs.
Most of the code I write to do biological data analysis is fairly linear. However, I also generally use a static type system and modularity to help ensure correctness.
I've perused a lot of code written by scientists, and they could certainly learn to use functions, descriptively name variables, use type systems and just aspire to write better code. I just saw a paper published in Science had to issue a revision because they found a bug in their analysis code after publication that changed most of their downstream analysis.
It does get rather problematic when you have large quantities of...stuff. You can't run linear stuff in parallel so now you're bound to whatever single CPU core you have lying around.
I'd say that getting some basic data science computing skills should be more important than the silly SPSS courses they hand out. Once you have at least baseline Jupyter (or Databricks) skills you suddenly have the possibility to do actual high performance work instead of grinding for gruntwork. But at that point the question becomes: do the people involved even want that.
I write 'one off' programs all the time. Most of what I write I throw away, and I program for a living. Those are usually fairly linear. Which is fine. If I am writing something that will be re-used in 6 different ways by a 5-person team, that is when you get out the programming methodologies. It is usually fairly obvious to me when you need to do it. Someone who does not do it all the time may not know "hey, stop, you have a code mess".
It's one of the reasons why people end up with spreadsheets. Most of their data is in giant tables, and Excel does very well at that. It has a built-in programming language that is not great but not totally terrible either. Sometimes all you need is a graph of a particular type. Paste the data in, highlight what you want, use the built-in graph tools. No real coding needed. It is also a tool that is easy to mismanage if you do not know the quirks of its math.
It doesn't take being an expert at Excel to understand how Excel autoformats. It takes a few days of actually working with data or an introductory class that's today taught in American primary schools.
Sorry for asking but are you familiar with how MS Excel aggressively converts data to dates? There's no way to "sanitize" it (without resorting to hacky solutions like including extra characters) and even if you fix the data, it will re-change them to dates the next time you open the file.
I'm only familiar with LibreOffice and not Excel myself, but: if you want to be sure a column is treated as text in a CSV file, you have LO quote text fields on save, and have it treat quoted fields as text on import. I assume Excel must have similar options.
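For what it's worth, quoting alone doesn't help with Excel: it applies its type guessing after CSV unquoting, so a quoted "SEPT1" still becomes a date. The workaround I've seen for files destined to be opened in Excel is to wrap fragile fields in a literal-text formula. A rough Python sketch of that trick (the column names are made up for the example):

    import csv

    def excel_text(value: str) -> str:
        # ="SEPT1" is parsed by Excel as a formula that evaluates to the
        # literal text SEPT1, so no date/number guessing is applied.
        return '="{}"'.format(value.replace('"', '""'))

    with open("genes_for_excel.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["gene", "count"])
        writer.writerow([excel_text("SEPT1"), 42])     # example row
        writer.writerow([excel_text("MARCH1"), 7])     # example row

The obvious downside is that the file stops being plain data for anything that isn't a spreadsheet.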
For the most part we aren't talking about computational biologists but experimentalists using Excel. People at the bench need to collect their data somehow, and using Excel for tabular data and Word for text data is just what they know. Typically they then pass these files over to computational biologists for analysis. Yes, it would be nice if they would use more appropriate tools, but I know from experience that the typical result of trying to teach them better tools is the experimentalists just rolling their eyes and saying that they don't have time to learn some nerdy program because they have experiments to run.
Considering how Perl was chosen as the computational biologists' lingua franca in the 1990s and 2000s because it was good at text manipulation (genes being represented as text), I would say they don't have a history of making good choices.
> " To people calling, "why do they use Excel?" that's like asking "why must we be subjected to gravity?""
I respectfully disagree with this. Excel is fundamentally not suited to analysing *omics data. It's often the default program affiliated with a .csv filetype on people's computers, but trying to get an entire field of scientific research to rewrite itself based on its glorified bugs is...wrong, in my opinion.
If you see wrong things in the world, do you accept them as they are, or try -- however ineffectually -- to force change for the better? I for one bang the drum into the wind and try to get biochemists off it. I teach people to be very sceptical of excel in my stats courses, for example (aside from some showstopping bugs and downright dangerous defaults, its RNG is particularly crap).
Excel is not fundamentally suited, but it's fundamentally used.
I wish you luck, but I long ago declared defeat against Excel. I can't prevent everyone from using it. I have come to accept that whatever Excel does is what I have to live with, because someone, somewhere out there hasn't heard the anti-Excel gospel yet.
I'm actually having a lot of scenarios where Excel is banned except for some managers, C-level and board type stuff.
If you produce, touch or directly act on data, Excel is no longer allowed. On the other levels the people that still use it (because they think they have the skill - which they often don't) can't do meaningful direct-action on those files anyway so that solves part of the problem of acting on bad data or bad data processing.
We do allow web-based spreadsheets, that has the benefit of limiting what you can do and drives people to either put a request in with the correct team or take internal courses on how to do it right.
Some people hate it, but then again, some people hate Excel. The difference is that the people that don't hate it now have an empowered skillset that they didn't have before. In theory you could do the same with Excel-oriented internal courses, but we didn't want to do Excel because a lot of us think it's stupid (often simply for emotional reasons but the push was based on hard data like process metrics and user input).
To add to this: it's probably because I find myself attracted to data-driven organisations where using Excel didn't help the workforce much anyway, and the positions I'm in allow for a certain degree of influence on the tools that we support and how much explicit allow/deny we apply on them.
> The difference is that the people that don't hate it now have an empowered skillset that they didn't have before.
Not knowing how to use the primary tool in your field because you only know how to use a web-based spreadsheet with 1/100th the functionality is not an "empowered skillset." It's a cruel trick to play on employees, who will have difficulty finding further jobs in that field because they don't have a basic understanding of the tool everyone else uses.
I was talking about Apache Spark via Databricks. ~220 of the people that were attached to Excel and PowerBI switched over for their calculations and data processing, 3 remained. One was let go, one didn't actually use Excel to write data, only to see it and search it, and the third wasn't able to learn the replacement so they got to keep Excel.
It's not a cruel trick and even if it was: web-based spreadsheets might not have the same functionality, but the people that were using the desktop-based spreadsheets weren't doing much more than basic computation on fields and search/replace. That works fine on the web.
It's not like we took away their toys and gave them dirt instead. We just made everyone take the startup course to get them acquainted and if that went well we would decide our next steps. It went well, and the next steps became removing Excel from the standard workflows.
Quantitative social science (which maybe a lot of comp bio people might regard as a "softer" science) has really made huge strides in moving this kind of analysis to R. Of course one person can't change it, but those with high status in the field can make a difference by leading the way and setting standards.
But they don't, and that's a problem. It's an education problem: everybody automatically defaults to Excel, even when it's not the proper tool. People need to learn about better tools.
You can't force people to use a different tool that they don't want to use. They [(computational) biologists] know other tools pretty well, they don't want to use them. If you want to help, work on better tools for them, based on their needs. Regulation is about the worst thing you could've thought of. If you said this to my gf's research group, they would yell at you, Excel is a loved piece of software among them, and other spreadsheet applications are not even close. They don't want to have a dozen different tools that they have to switch between fifty times an hour, converting data in the meantime, they want to know one or two very well and have the data all there. Excel is a data swiss army knife, which is exactly what they need, it has some warts but forcing them off Excel really is not the solution.
Where am I talking about forcing them? You're the only one bringing that up, and I strongly disagree with it. Education is not force, it's empowering.
The only reason they want to use a tool that doesn't really fit their use case, is because they're not aware of better tools. I find it very hard to believe that scientists actually want to use a tool that corrupts their data. If that is true, then that is absolutely a problem with their attitude towards science and data.
It's not that hard to imagine a tool that can do exactly what Excel can, but without corrupting your data. It might even exist already. LibreOffice got mentioned a lot; it can do almost(?) everything Excel can, but without corrupting your data. If there are problems with it that make it useless to scientists, there's a good chance the LibreOffice community can fix them.
I think better tools that preserve the integrity of their data are absolutely the solution here.
You're taking a minor, very unusual issue that happens only to a subset of a subset of a subset of users and resolving it with regulation, while forgetting all the cases where Excel works perfectly for them. LibreOffice is good, but it has many more warts than Excel does and the presentation capabilities (a key functionality) simply are not as good.
Are you seriously opposed to education because you see it as regulation?
Also, I don't think it's a very minor issue if the tool you use to process your data, changes your data. It may only do that in limited cases, but the fact that it does it at all should alarm anyone who cares about data integrity.
> This is the exact sort of thing that indicates market-power being concentrated to a point requiring severe, acute regulatory action.
This is what I react to. Sorry I missed that you're not the original commenter, that's my bad.
I am not sure, though, what other tool the biologists should learn - they usually know R and Python, and they use Excel because they need what Excel offers; there isn't any other software that allows them to easily do whatever they need to do with the data, quickly iterating on the ideas AND then presenting it nicely.
The solution here should be to fix Excel, but it really is a small issue (that has been worked around, too; and if you know how to use excel correctly, it does not happen to you) compared to making another Excel, which seems like a monstrous task.
Yeah, that wasn't me, and that comment was already countered by pointing out there was plenty of competition in the spreadsheet market. My reaction was that a lot of people may not be aware of all the options and simply default to Excel because that's what they know.
You're claiming that they do know the alternatives but the alternatives fail in worse ways than Excel does.
I don't know what the magic sauce is that makes Excel so much better than the alternatives, or why it would be such a monstrous task to replicate that. My impression is just that to many people, Excel is simply the default tool to enter data in, no matter what the actual problems with Excel are. Excel is like gravity; good or bad, you put up with it because it is what it is. And if that's the case, that'd be pretty bad and in need of change, because Excel is not at all like gravity; it's merely one tool of many. If there's a more appropriate tool, people should use that. If there isn't, one could be developed.
I stand by my point that a standard data collection tool that modifies the data you put into it is a really bad idea for any field where accuracy and data integrity is important, and I'd expect science to be one of those fields. You may be able to work around the limitations of the tool, but the risk is still there.
Ah yes. Without acute regulatory action, Lotus 1-2-3 would have never been defeated, the PalmPilot would have reigned supreme as a smartphone device to this day, and Starbucks would have taken over all coffee production.
Having Excel be the default for .csv files has caused me problems in the past as well with Excel automatically changing data when you open it. The unfortunate thing I have come across multiple times is a csv file that has some values with a lot of significant figures, i.e.
123456789876, 987654321234
After opening in Excel it will convert these to:
1.23E+11, 9.87E+11
Then resaving the file (as a csv) will store the values in scientific notation and lose all of the digits.
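If the file has to be touched up in code rather than by hand, the usual defence is to read every column as text and convert only the ones that are genuinely numeric. A small pandas sketch (file and column names are placeholders):

    import pandas as pd

    # Read everything as strings so long numeric identifiers survive untouched.
    df = pd.read_csv("input.csv", dtype=str)

    # Convert only the columns that really are numbers.
    df["measurement"] = pd.to_numeric(df["measurement"])

    # Identifier columns are written back exactly as they came in.
    df.to_csv("output.csv", index=False)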
This issue is common in the oil and gas industry. Wells are typically identified with a 14-digit number that Excel likes to convert to scientific notation and then truncate digits when saving to CSV.
That's the problem of having Excel open CSV files by default - if you use Data > From Text/CSV you get a chance to adjust the import process and make those columns text rather than letting Excel guess that they are numbers.
Same here, but with IMEI numbers. Excel silently truncates them, it’s frustrating. Why can’t the auto format detect and just store them in text format?
I was thinking that perhaps the autoformat options could be explicitly shown in the column headers
number date currency
A B C
and you could click on the format to change it. Italics could show assumed formats, bold could show user set formats.
I'm newly returned to (very minor) Excel use and assumed formats with no feedback seems wrong. My situation is dates _not_ being picked up as dates and it not being obvious that it hasn't; I'm using ISO format (eg 2020-06-17), I guess you have to create a new date format or use USA-ordering before it will recognise dates.
I would be happy if Excel supported some way to specify the column type in the CSV itself. Ie first row headers, second row data type/hint, subsequent rows the actual data.
At the moment users need to remember, every time they import the CSV, to specify a data type for each incorrectly handled column. Suffice it to say they don't always remember... and with large CSVs with hundreds of columns it's just a pain.
What might be better is what I'll call "CSV Schema": support for paired CSV files where one normal CSV has data, and the other is a schema file which contains headers matching the data file's headers with a standard extra first header prepended (call it "meta-name"), and whose data rows each provide metadata for the corresponding column in the data file. The most common (and often the only) metadata row might be for type information, but the format could support additional metadata.
This would allow existing CSV tools that work with data files to continue to work without modification, while CSV Schema aware tools could make use of the schema.
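To make the idea concrete, here's a rough sketch of what a schema-aware reader might look like, assuming a hypothetical data.csv / data.schema.csv pair where the schema file repeats the data file's headers behind a leading "meta-name" column and carries a "type" row:

    import csv
    import pandas as pd

    def read_with_schema(data_path, schema_path):
        # The schema file mirrors the data headers, with an extra first
        # column ("meta-name") naming each metadata row, e.g. "type".
        with open(schema_path, newline="") as fh:
            meta_rows = list(csv.DictReader(fh))
        type_row = next(r for r in meta_rows if r["meta-name"] == "type")

        # Map declared types onto pandas dtypes; anything unknown stays text.
        dtype_map = {"int": "Int64", "float": "float64", "text": "string"}
        dtypes = {col: dtype_map.get(kind, "string")
                  for col, kind in type_row.items() if col != "meta-name"}
        return pd.read_csv(data_path, dtype=dtypes)

    # df = read_with_schema("data.csv", "data.schema.csv")

Plain-CSV tools would simply ignore the schema file, which is the point.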
This is what happens on explicit import, but, formatting is per-cell, not per-column. "General" is the format for "idk, guess" - it stays General after a transform, it's not converted.
Because most people want to import numbers as numbers most of the time. Excel detects numbers and treats them as numbers. You do have the option to import them as text. So, Excel (as usual) does what's the best for most people.
The reasonable default should be not to change existing data. If the user wants to, by all means do it, but not before. If you want to be smart about it, offer a popup asking if a particular column or field should be interpreted as a number or date, but by default, keep everything a string.
Why does excel truncate a very long number silently? That is what happens with IMEI numbers. It cuts off the last 4-6 digits. It’s destroying numbers.
Excel should know any number that is longer than it can internally support should be stored as text. Or they need to introduce a data type that can store arbitrary precision numbers.
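For context on the "longer than it can internally support" point: Excel stores numeric cell values as IEEE 754 doubles and caps entry at 15 significant digits, so identifiers longer than that can't survive as numbers even before the CSV round trip described below. The same limit is easy to demonstrate in any double-based tool:

    # IEEE 754 doubles (what Excel uses for numeric cells) hold roughly
    # 15-16 significant decimal digits; a 17-digit identifier gets rounded.
    identifier = 12345678901234567                 # exact as a Python int
    print(int(float(identifier)))                  # 12345678901234568
    print(identifier == int(float(identifier)))    # False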
It's not that Excel can't support that long of a number in general. When you open up a CSV in Excel, it displays the number in scientific notation but still retains the full value; if you click on the cell, the formula bar at the top shows you the full number.
The asinine thing happens when you save it back as a csv. Excel doesn't think to itself "hey, I should write these full values I have stored in each cell to the csv file". It just writes whatever is currently displayed to the csv file instead.
Yes, and? It should be a simple checklist item for any scientific journal to ask the author whether they used excel or not. If they say yes, reject the paper unless they can show their work hasn't been affected negatively by using excel.
On the practical side, how would people show that their work hasn't been affected negatively by using Excel? How would the journals evaluate that? I suspect this would just become another box to check - yes, we use Excel, yes, we checked the results, like the last 50 times.
Intellectually, it feels snobbish to single out Excel like this. I'm a software engineer in science, and I generally agree that scientists should learn some kind of coding. But you can make mistakes in Python or R as well - not to mention in physical experiments. We should check data and methods in general, not presume incompetence if people use one tool.
By providing a repo with raw data, and the code that runs on it that eventually produces the results that are in the manuscript. Anything else is just a bunch of handwaving.
It's not coincidental that big evil FDA/pharma requires people to sign - on paper - their lab notes (and to have lab notes, and to sign that they calibrated the equipment, and that they checked for common mistakes).
And yes, I know this costs a lot, and that this is sort of a pipe dream. And I'm not saying this is the best solution. Renaming genes might be a lot better, because it just allows the field to move on, and Excel will eventually die out, or at least researchers will move away from it - maybe to Jupyter, if we're lucky.
So, all in all, of course Excel is just a symptom, but society is already pretty stubborn when it comes to financing R&D and progress.
That doesn't solve the analysis problem downstream when a non-expert is getting started, doesn't know about Excel's anti-features, and starts doing analysis using the only tool that the world has ever told them is acceptable for tabular data.
Ideally excel would change, but since we know it won't, and we want to work with lots of people with minimal problems, we must adapt.
Do they mean March 1 when they import data from a text file?
It's one thing to change typed in user text in real time. That's not causing any problems. It's another to randomly mutate cells amongst tens of thousands of rows. I don't think that has ever helped anyone.
How are these non-experts getting started? If they are in academia, academia should start teaching these practical things too. (Yes, I know the problem is that many old school bigwig researchers are doing even worse things.)
There are people of all sorts: amateurs that want to play with the data but will not make study of it their primary field, trainees that will make use of tabular data extensively in their career, and then experts in more rarefied fields (e.g. immunology, clinicians, etc.)
If I can get a trained immunologist looking at my data, I'd much rather have 5 more minutes of their analytical skills than teaching them about common data exchange pitfalls.
yeah but then you'll just make them use SAS and nobody wants that. just try convincing anyone who graduated two decades ago to use something reasonable like R
But we didn't accept them as they are. We looked at two options, change excel or change gene names, and picked the one most likely to solve the problem. Gene names mean nothing anyway. We're going from random letters that mean nothing to humans and computers to still random letters that mean nothing but at least don't get confused for dates.
There was a time when a gene was named to facilitate searching through a bookshelf full of laboratory notes. You could look at a gene's label and know exactly where to find reference to the gene: the label would indicate the stack number, the shelf, the section, the volume, the page(s) and paragraph... Labels such as Sonic hedgehog don't do that anymore.
I've been in the room when deciding what to call a gene that the lab had just characterized/identified as interesting for the publication they were working on, and let's just say that making it easy to catalog and find the gene wasn't really on their mind. They'd actually been using a deliberately obfuscated name in conferences to prevent people from scooping their work on the gene before, which is pretty common.
Conference is a different game, but that doesn't change what happens in the lab; if you look at the overall picture you can see the potential for duplication of alphanumeric strings. This is why labels must be developed for conferences, as you have a wider context in a conference than in your own lab. You can be scooped on research regardless of what label is given to a gene, and for that matter even if you make publication first, as not everybody has the same ethical framework of honesty or due accreditation.
In the time before computers were used reflexively, we would record data in cursive, written in hardcover-bound notebooks. A putative gene has a name that allows it to be found; later, if things work nicely, that putative gene is confirmed to be a gene, and if things work ideally the locus can be established.
Surely then the blame should go one step further back: .csv and tab-separated are poor file formats for *omics data and excel (and pandas, etc.) have no choice but to guess data types and metadata. Unfortunately I'm just not sure there's any format that's sufficiently better to ever replace them.
If someone types MARCH1, DEC1, OCT4, SEPT1 into a column, why should Excel not treat these as dates? Probably 99.999% of users would prefer this auto-understanding. To make users have to then select cell formatting and select date is a terrible UI decision.
It's odd to call such a useful feature a bug.
> Excel is fundamentally not suited to analysing *omics data
But it's excellent for a massive range of basic analysis and especially for a unified platform to share results, which is why everyone uses it. It's good for publication or proposal steps of creating graphs from data output from other more specialized systems, that usually lack the flexibility and widespread use that Excel has.
If my group uses some specialized, often in house, analysis software, my results cannot be spread to others unless I put them into a more common and accessible format. And nothing in the genomics and wider scientific community has anywhere near the widespread availability as Excel.
You might as well complain that they also use Word docs instead of LaTeX or troff. Not everyone wants to spend days fighting software instead of doing work.
> If someone types MARCH1, DEC1, OCT4, SEPT1 into a column, why should Excel not treat these as dates?
It should treat these as dates, unless told otherwise. But Excel goes one step further, and throws away the text you originally typed in. That means, if you type "DEC1" into a spreadsheet, see it looks stupid, and change the data type of that cell to "Text", then it's too late, and Excel will show "37226". It has destroyed the original data and replaced it with the numeric equivalent of the date.
The data type of the spreadsheet cells should be a display format only. The text you typed in (or loaded from a CSV file) should be preserved, and the data type of the cell should only dictate how that is treated and displayed, not how it is stored.
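For what it's worth, that 37226 is a standard Excel date serial: dates are stored as day counts from an epoch that works out to 1899-12-30 for anything after February 1900 (the off-by-one comes from the Lotus 1-2-3 leap-year bug that Excel preserved). A quick sanity check of the figure:

    from datetime import date

    # Excel date serials count days from an effective epoch of 1899-12-30.
    excel_epoch = date(1899, 12, 30)
    print((date(2001, 12, 1) - excel_epoch).days)   # 37226, i.e. "DEC1" read as 1 Dec 2001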
Excel does that, largely: cells have a datatype, a value, and a format (among other properties). Changing the format leaves the value unchanged.
However, when you input something into a cell, Excel has to parse it and decide what value and datatype to assign to it. That is just inevitable, and the same everywhere: in Python, `x = 3` assigns an integer, `x = 3.` a floating point, and `x='3'` a string. Similarly hints (or explicit specifications of type) are possible in Excel.
However, given that, what you typed originally is not recoverable anywhere (Python doesn't record whether you wrote `x=3.` or `x = 3.00`, why should it?).
So, to the extent that it is reasonable, Excel does what you request, and to the extent it doesn't do it, it's not reasonable, I'd say: A cell need not maintain the original pre-parsing input.
> The text you typed in (or loaded from a CSV file) should be preserved
That's even worse, since a user assumes the cell holds what it shows. Excel, for example, auto-completes entries, which is extremely useful for data entry and for accuracy. If it only held the few characters entered, it would have to repeat exactly the same autocompletes every time it opens, and do so even if the data that originally caused the autocomplete changes. All this is nearly algorithmically impossible without changing the document into a history. And now CSV is useless for data transfer.
It's vastly more sane to convert to the data the user meant to enter, show it to the user for confirmation, and save that.
That the 99.99% use case bites the remaining 1 in 10,000 is a perfectly reasonable trade off, done throughout society in uncounted ways.
It is shortsighted to call this behavior a "useful feature". It was and is a grave and serious mistake.
Automatically converting SEPT1 to a date caused far more damage than the benefit of the fleeting convenience.
The damage that this "feature" caused and will keep causing goes far beyond the domain of bioinformatics.
Even in bioinformatics things have not been "solved", for the next decade the same gene will have two names when looking it up in previously published data.
> Automatically converting SEPT1 to a date caused far more damage than the benefit of the fleeting convenience.
Who are you to judge? For the novice, even for most users it'll be what they want. And for the expert (or even intermediate user) it is trivial to override.
How exactly would you trivially correct the massive number of published results in genomics data where SEPT1 was converted to a date?
20% of publications had errors introduced by Excel in them! The real number is probably much higher (Nature had a rate of 35%). What is the rationale in saying that the benefits must still be worth it?
See: Gene name errors are widespread in the scientific literature
The root of the problem is not how Excel displays information, or how Excel parses the data, but that the data is tacitly changed upon saving.
Open a CSV file that has SEPT1, don't take any action, save it right away. Boom, the data has been changed and you cannot recover the original information.
It's one thing if it were typed in, but this is not typically from typing, it's from opening a CSV file that has a field that's OCT4. (And honestly, if I want a date, I'll hit the date formatting button, Excel.)
It is super hard to figure out a way to import data into excel without triggering auto-conversions. I knew a way a few years ago when I was dealing with lots more biologists, but I have forgotten now. If I had to do it now, I would write XLSX directly then manually check the file in Excel, because Excel is fundamentally untrustworthy.
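Writing the .xlsx directly does sidestep the import heuristics entirely, because you control each cell's type and format yourself. A minimal openpyxl sketch (the column names are invented for the example):

    from openpyxl import Workbook

    wb = Workbook()
    ws = wb.active
    ws.append(["gene", "expression"])                        # header row

    for gene, value in [("SEPT1", 1.3), ("MARCH1", 0.7)]:    # example data
        ws.append([gene, value])
        # Pin the gene-name cell to the Text number format ("@") so Excel
        # won't reinterpret it even if someone edits the cell later.
        ws.cell(row=ws.max_row, column=1).number_format = "@"

    wb.save("genes.xlsx")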
So much this. I work in educational software, and have a little write up about the troubles we’ve had with schools “fixing up” CSVs from their student information systems before sending them to us. The simple act of opening the CSV and hitting save loses data (Microsoft actually fixed some of the problem very recently). That said, they will use Excel, it’s not in my control.
I received dozens of comments “Just use Open Office / Libre Office”. I am but a man, I cannot change the world. I forgive that which I cannot control. We work with tens of thousands of schools. Getting all of their administrators to install Open Office, and thousands of administrative assistants to remember to use it rather than Excel is simply an impossibility.
If the gravity metaphor doesn't reach you, maybe this will:
The world speaks English. It's an illogical and absurdly spelled language. Esperanto is miles ahead on all logical measures. Yet we will never switch, both because the cost of switching is enormous, and because there is no way to coordinate/organize such a move.
> The world speaks English. It's an illogical and absurdly spelled language.
Man, the extent the aviation world has to bend itself to avoid the pitfalls of English....
"to" and "two" sound alike, and so does "for" and "four".
"Turn left t(w)o four zero" or "Climb t(w)o eight thousand"
To avoid that, the lingo is:
"Turn left heading two four zero", "Climb and maintain two eight thousand" (or, "Climb flight level two eight zero").
At any rate, switching to metric would be much better (If I'm 2000 metre high, and I have a glide ratio of 1:10, I can glide 20,000 metre or 20 km. If you're 5000 feet high, and you have a glide ratio of 1:10, how many nautical miles can you glide?), but for the reasons you mentioned, it won't happen anytime soon.
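For the record, the imperial version works out to roughly 8.2 NM, which is exactly the extra conversion the metric case avoids:

    # 5,000 ft altitude, 10:1 glide ratio, 6,076.12 ft per nautical mile
    print(5000 * 10 / 6076.12)   # ~8.23 NM, vs. the trivial "20 km" in metric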
Most of the digits sound distinct enough that they don't get confused- except "nine", which can sound like "five" over a bad radio link, and is therefore pronounced "niner".
Also... Excel is great. What exactly is "ahead" of Excel? People on here really underestimate how great of a product Excel is because of its cultural affiliations.
Excel is great, and generally really useful. But not so much in microbiology. Sure, if you have several dozen genes/proteins with some quantitative values and some conditional formatting to visually distinguish certain properties, it's fine.
But in microbiology you usually work with very large datasets that undergo a lot of calculations through a lengthy pipeline. Larger sets and more intense ML approaches even need HPC clusters. Inserting Excel into this pipeline would be catastrophic.
Oh yeah, I'm totally aware. My day job is designing and building software to construct/maintain exactly those pipelines.
What I'm saying (which is not news to you) is that clearly Excel is excelling along some dimension that people really care about. Even if it is lacking along other dimensions, the goal ought to be figuring out what makes Excel so good, then figuring out how to reconcile that with the needs of e.g. a high scale genomics pipeline.
But to make even an inch of progress against that, we need to get rid of this "haha Excel is for plebes" attitude that many in the software community adopt.
Most computers also have a built in calculator app, but I have never heard of those being difficult to pull from the hands of users, even when replacing it with purpose built software that costs tens of millions of dollars per year.
In the case mentioned in the article, LibreOffice is on par with Excel. Don't believe it? Open up LibreOffice, type "MAR-1" or "SEP-2" in any cell, and look how easily your data is corrupted.
I have run into some other glitches: identifiers of the form "1E123" get turned into scientific-notation numbers. The column was something like "1A123", "1B123", etc. Those things are sneaky: you can have thousands of rows, and Excel/LO doesn't mind if only 0.1% matches its rules for smartness. They just change without notice, leaving the others intact.
I'm a bioinformatician, and this kind of stuff is a daily issue. IMO, two problems collide:
1. People with very rudimentary knowledge of computers. I've met some people way smarter than me, with thousands of papers written, who cannot open a CSV in R with a gun to their head. They can cut you open, put three robotic arms in your heart, remove a tumour, sew you up and send you home in three days. But R is just too much.
2. People who need to collect and analyze data. Excel is the easiest tool they know, and at the same time it's no lame toy: it sorts, it sums, it filters, it does stats, it graphs... You cannot ask these people to use SQL, especially if it involves foreign keys. They just use Excel for data collection, storage and analysis.
Excel (and LibreOffice) are to blame. This shit could be avoided if they had a "scientific mode" to not be smart with your data, and be the default mode for any file in CSV/TSV mode. Unless you explicitly turn the file into an XLS/ODS or ask "turn this range of cells into dates for me" or "these cells are scientific numbers", they should not change a single dot.
> This shit could be avoided if they had a "scientific mode" to not be smart with your data, and be the default mode for any file in CSV/TSV mode.
There's a whole lot of business use of CSV/TSV with Excel that benefits from the existing defaults. Heck, I've seen federal government websites distributing code lists that are actually in CSV format with .XLS extensions and that are expected to use the default “smart” conversion.
Even if Excel should arguably have had different defaults, the impact on other uses of changing it now would be enormous.
Then we are doomed to the lowest common denominator. The bad actor here is the one who expects some software to magically change their data; they should expect an error, or at least a warning or a request for confirmation.
The good actors are the people who carefully curate their data, only to have it corrupted silently by the program.
Let's say then, keep the default but add at least some sort of "respect my data as it is, because this is a CSV with CSV extension" mode.
> Let's say then, keep the default but add at least some sort of "respect my data as it is, because this is a CSV with CSV extension" mode.
CSV doesn't really have data types (or even a hard and fast standard), so that doesn't really work. OTOH, Excel lets you specify data handling by column when importing from untyped text-based formats like CSV, which you should probably do if you care.
Or you should write up a standard for a CSV-like format with explicit semantics around data types (it could even be CSV-compatible so that you get no worse results than normal CSV handling if using software that is aware of CSV but not the new format) and start using it where those defined semantics are useful, and try to get Microsoft to add specific support to Excel for it.
> CSV-like format with explicit semantics around data types
You could encode the column names in row 1 and column types in row 2. JavaScript type names would be convenient for most CRUD work, but SQL type names might be more appropriate for more complicated data.
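A reader for that convention is only a few lines; here's a sketch for a hypothetical file whose first row is column names and second row is type hints (using simple type names rather than the JavaScript or SQL ones suggested above):

    import csv
    from datetime import datetime

    # Hypothetical two-header convention: row 1 = column names, row 2 = types.
    CASTS = {
        "text": str,
        "int": int,
        "float": float,
        "date": lambda s: datetime.strptime(s, "%Y-%m-%d").date(),
    }

    def read_typed_csv(path):
        with open(path, newline="") as fh:
            reader = csv.reader(fh)
            names = next(reader)                                # row 1: names
            casts = [CASTS.get(t, str) for t in next(reader)]   # row 2: types
            for row in reader:                                  # rows 3+: data
                yield {n: c(v) for n, c, v in zip(names, casts, row)}

    # rows = list(read_typed_csv("data.csv"))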
LibreOffice also explodes (or at least did a few years ago) if you try to open a few hundred mb csv file. Definitely can’t do fast filtering on columns in large files.
Should those things be done in sql? Yes... could this data be sent as SQLite files? Yes... were they endemically sent as csv and nothing could change this? Yes...
Excel is by far the best of “spreadsheet” apps I’ve seen at munging through multi-hundred-mb csv files.
fwiw, i have not tried to use libreoffice for any heavy lifting, unfortunately (fortunately?). i tend to use bash utils for that, and i 100% do not think telling people to "just use grep / awk / perl and csvcut" is by any means a solution.
English as the lingua franca (language of Western Europe) is less than a century old. My mother studied French, because it was the dominant language of that time.
This calls to mind a quote from Cormac McCarthy's Blood Meridian:
It makes no difference what men think of war, said the judge.
War endures. As well ask men what they think of stone. War
was always here. Before man was, war waited for him. The
ultimate trade awaiting its ultimate practitioner. That is
the way it was and will be. That way and not some other way.
Just swap our war for Excel! "It makes no difference what men think of Excel, said the judge. Excel endures. As well ask men what they think of stone."
When I worked in clinical trial design/analysis, no one did their analyses in excel, but for initial data formatting and clean-up? /everything/ went through excel, even if the final cleanup was a python script.
You’re not going to have the same ease of eyeballing your data in SPSS or SAS or R.
I understand that Excel allows for a quick glance, but I'm not sure how R does not offer the same visual cues? Assuming that the data originates from some csv/table structure, Excel requires some GUI clicking and R needs a read.csv() call.
I would say R is a lot more useful for a quick check on whether all columns have one data structure, contain NaN/NA's, are of equal length, etc.
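The same few-line structural check works in any scripting environment; here's a pandas version for comparison, since other examples in this thread lean on Python (the filename is a placeholder):

    import pandas as pd

    df = pd.read_csv("results.csv")

    print(df.shape)          # rows x columns at a glance
    print(df.dtypes)         # one inferred type per column
    print(df.isna().sum())   # missing values (NA/NaN) per column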
True in every sense.
We shifted from Google Sheets to O365; there's just no way to get around Excel. It's efficient, and everyone from novice to expert can find their way around it.
The dataset was growing and Sheets was slowing down. Excel installed locally on a machine with good hardware is a pleasure to work with. As the team grew, more and more people were asking for Office licenses, and like I said, we have colleagues who've been using Excel for ages and they like to have 2-3 files open on multiple screens and work between files using references. It's just not as fast as using Excel locally.
I'd like to add that we also had an issue with our clients using only Excel, and there were issues with calculations not working - not so often, but with enough people making noise, our management decided to get the subscription. I've been using Sheets for years, even as a student when I couldn't afford an Office license.
I often read comments on HN starting with “to people asking” or “to the people wondering”... and introducing a new discussion, usually on another similar topic, basically hijacking the whole discussion.
“Newton had a great time for a long time with his description [of gravity], and then at some point it was clear that that description was fraying at the edges, and then Einstein offered a more complete version,”
As for flat spreadsheets, they serve their purpose just like any other tool. I don’t think I heard people complain about spreadsheets for dealing with flat data, the same way no one complains about hammers to deal with nails. If we’re trying to deal with interconnected and enterprise resources we don’t maintain those resources in Excel. We store them in Enterprise Resource Planning software or build relational databases (which are as old as gravity).
Nothing is wrong with spreadsheets, they’re the sweetest way to deal with quick data until we use them for things they can’t do.
Whenever people ask me about getting into the field and if they should learn Python or R or Scala or whatever else, I always respond with you need to get really good with excel/Google Sheets and SQL. Then worry about everything else. I can't tell you how many times I've done an analysis I'm really proud of in something like R Markdown only to have the end user say that's great now can I get it in an excel? I just deliver everything in excel as a default now.
Agreed that spreadsheets aren't great for that, but totally disagree about SQL. SQL is the best cleaning and wrangling data tool out there. There's a reason the trend in the industry is towards tools like DBT. There are certainly things for which you need to use Python or R, but for 95% of tasks some combination of SQL and a spreadsheet will get you where you need to go.
The hyperbole here is amazing. You're saying that a 40 year old computer program that runs exclusively on windows is as un-changable as a _law of nature_.
The English comparison made in a peer comment is more apt, but wow.
The tech industry is so self-centered we think that Scientists should change their fields because of our bugs.
Excel is older than many of us, so yeah, having it around for our whole lives does really make it seem like a law of nature. Sooner or later, all data ends up in Excel. I don't like it at all, but that's what happens.
As for this being an Excel bug: many, perhaps most people don't think it is. That's why it's so permanent: almost everyone likes it.
Also, on the list of dumb things that make me roll my eyes at Microsoft, interpreting the string "MARCH1" as a date and trying to normalize date formats isn't exactly the dumbest.
OK then. Use Excel. But at least know the basics of Excel. Such as simply right clicking the column header and picking a different format so that it will stop auto-interpreting the input.
Nah, I plan on having HDF5 or Apache Arrow be the portable table of data and then build WASM based viewers - it's the excel killer but the war of attrition will be long !