-Academic researchers are already overworked, underpaid, and undertrained. Asking them to spend even more of their time to meticulously upload all their notes and data to an electronic notebook is going to be an uphill battle.
-Academic scientists live or die by their ability to publish. Open data, especially if you're sharing in real time, makes you vulnerable to being scooped by competing researchers. Even disclosing data after the fact makes it easier for others to benefit from your work, with no benefit to the people who collected the data. Given how cut-throat academia is, you're also not going to get many researchers on board with this idea.
-Interoperability of most laboratory software is poor. People have been trying to get laboratory instrument manufacturers to support open data standards for years with little success. They don't have any financial incentive to allow competitors to have easy access to their data.
Good points. And even when software is open source and standards exist, the incentives may not be there to use them or to build anything lasting in the commons, so every lab writes its own toolkit and science gateway or data portal, which gets published, lives for a few years, then perishes when the grad-student maintainer moves on.
E.g. in ~1900 three different people, Hugo de Vries, Carl Correns, and Erich Tschermak von Seysenegg, discovered the laws of genetics, checked the literature, and found the work of Mendel. Each of them credited Mendel.
The scientific world rewards people in glory and honor much more than money. If you want more money go corporate. If you want to reward people more with money then they’ll pay less attention to the glory but that’s really expensive.
>Interoperability of most laboratory software is poor.
In computerized chromatography, we have "standard" .CDF files.
For the handful of us who have actually been computerized for over 40 years, but who were very familiar with pre-computerized performance, a common file format was supposed to be something to look forward to.
We watched through the 1980's as each major instrument vendor brought computerization to the benchtop as a derivative of how they had been using mainframes & minis. That meant already-diverse data file formats diverged further as they became more advanced, while compromises were made so each would stay optimized, with a degree of backward compatibility, between the limited memory and storage of the mainframes and the even more limited memory & storage of the emerging pre-PC benchtop units. This was all very high-dollar stuff.
Then the office PC arose and started to offer much more cost-effective processing power, memory & storage than any instrument vendor could match, so they quickly abandoned the established application-specific hardware/firmware approach. Instruments then began to be designed so that some or all of the most numerically challenging data handling was no longer fully possible on that vendor's electronics alone. You also needed the vendor's specialized software running on a DOS and later a Windows PC, using the recommended supported interface such as COM ports or HPIB adapters to connect the cheap PC to the expensive instrument.
The incentive remained for each vendor to continue its own proprietary optimized file format, even if they were now all storing them on FAT32 volumes which had become standard in offices.
Each vendor had its own ecosystem from the beginning.
From their point of view, ideally each entire oil company or drug company would have lock-in to that one vendor; if not, then at least on a facility-by-facility basis.
But the problem was most obvious to those researchers who had the top model from each of the top vendors. You still could not use one company's software to access a different company's data file, and you could not exchange files between facilities unless they had the same instrument vendor.
This was still before Hewlett-Packard gained a very reliable reputation and became the biggest chromatograph company, after which the whole instrument division was spun off as Agilent.
So it was still considered a level playing field, and the Analytical Instrument Association was formed, which included the major vendors. The purpose was to define a common nonproprietary file format with all the metadata, etc.
This couldn't be done overnight, but steady progress was made for a few years; HP was a major AIA contributor of very worthwhile personnel & effort.
During those years HP became the biggest manufacturer and one day they quietly lost their incentive for the nonproprietary effort.
Progress rapidly crumbled as the embrace & extend phase then started leading toward extinguish. Momentum could not be maintained, but preservation was accomplished when the format was dropped in the lap of ASTM, where it was quickly approved in the late 1990's without deep understanding or support.
This is one of the best examples of a true standard, since it has remained unchanged. Each vendor had already implemented PC support as AIA progress was made, upgrading across versions in anticipation of eventual finality.
And one day it just stopped, and it has been frozen in time since then. Even though it's an incomplete and unfinished standard, it turns out to be so much more ideal than a continuously "upgraded" approach.
CDF does not really stand for "Chromatography Data File"; it is actually shorthand for netCDF, which was a well-established storage & communication format for early data-intensive work by the government weather service. Using the freely available netCDF.dll, a CDF file can be parsed into a nice extensible text file.
Basically the extensibility of the netCDF layout was utilized & curtailed by the AIA participants, focused on chromatography, and each vendor's software has been able to "export" and "import" a fairly compatible CDF file ever since.
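To give a rough idea of how approachable these files are once exported, here is a minimal sketch using the Python netCDF4 package rather than the old netCDF.dll; the file name and the "ordinate_values" variable name are assumptions based on typical AIA/ANDI exports, not any vendor's documented workflow:

    # Minimal sketch: dump the contents of an exported AIA/ANDI .CDF file.
    # "sample_run.cdf" and "ordinate_values" are placeholders; actual names
    # depend on the vendor's export.
    from netCDF4 import Dataset

    ds = Dataset("sample_run.cdf", mode="r")

    # Global attributes hold the run metadata (sample name, injection date, detector, ...)
    for name in ds.ncattrs():
        print(name, "=", getattr(ds, name))

    # Variables hold the arrays; the detector trace is conventionally "ordinate_values"
    for name, var in ds.variables.items():
        print(name, var.dimensions, var.shape)

    if "ordinate_values" in ds.variables:
        signal = ds.variables["ordinate_values"][:]
        print("data points in chromatogram:", len(signal))

    ds.close()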
People just don't know much about this kind of thing and it's only been about 25 years, so very many chromatographers are not familiar with it yet.
Lock-in still rules and most vendors won't have this in any of their workflows or standard training.
It'll just have to be something to look forward to like it was 40 years ago.
Now OpenChrom is a project where so much progress is needed that one professor's lab is not enough. This is what needs to be followed now and contributed to in a modern engineering way, until it can be set in stone the old-fashioned way ASAP.
Probably the hard part is dealing with enough metadata to capture things like: the reaction only works because the supplier of one of the reagents used by that lab had ppm-level copper impurities.
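For illustration only, a hypothetical record of the kind a notebook would need to keep per reagent lot; none of these field names or values come from any actual ELN standard, they just show the level of provenance detail involved:

    # Hypothetical reagent-lot record (made-up schema, supplier, and numbers)
    reagent_lot = {
        "name": "zinc chloride",
        "supplier": "ExampleChem",              # made-up supplier
        "lot_number": "ZC-2019-0142",           # made-up lot
        "certificate_of_analysis": {
            "purity_pct": 99.5,
            "trace_metals_ppm": {"Cu": 12, "Fe": 3, "Pb": 0.4},  # the ppm copper that quietly mattered
        },
        "date_opened": "2019-06-03",
    }

    experiment = {
        "id": "EXP-0457",
        "procedure_ref": "notebook p. 112",
        "reagent_lots": [reagent_lot],          # link each run to the exact lots used
    }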