While we have a common ancestor in the original UNIX, so much of illumos really comes from our SVR4 heritage -- and so much of that has been substantially reworked since then anyway.
More seriously, it does seem like there were a number of interesting systems research and development collaborations in the 1980s: BSD at Berkeley, Athena at MIT, Andrew at CMU, etc.
Currently it seems like the interest, funding, opportunities, and incentives for academic researchers are largely for short-term projects and AI/ML rather than long-term, ongoing systems projects. The modern funding and publishing landscape seems to emphasize speed and quantity over quality and impact.
Moreover, it seems that companies with deep pockets (Microsoft, Apple, Nvidia) may be less likely to collaborate with and/or fund academic projects as IBM and DEC did in the 1980s. It could be that those partnerships weren't hugely beneficial for AT&T, IBM and DEC's businesses.
Hello! I am Head of Engineering at Sourcegraph. I'd love to get feedback on which SCIP indexers you've had issues with, and, if you have the time, feedback on what sort of problems you've had with them. Thank you so much!
Hey guys, it's been over two months since I was in the weeds with SCIP, so I'm not going to be able to write very detailed issues. Most of my experience was with scip-python, with some in TypeScript.
1. roles incorrectly assigned to symbol occurrences
2. symbols missing - this is a big one. I've seen many instances of symbols appearing in the "relationships" array that were not included in the "symbols" array for the document, and vice versa. Plus "definition" occurrences have been inconsistent/confusing - only some symbols have them, they don't always match where the thing is actually defined (file/position), and sometimes a definition occurrence has no counterpart in the symbols array.
3. the treatment of external packages has been inconsistent; they sometimes get picked up as internal definitions and sometimes not
I think SCIP is a great idea and I'd explore using it again if it got better. But I see issues sitting in the backlog for 6+ months, which makes it seem from the outside like Sourcegraph is not prioritizing further development of SCIP.
I don't think this is a fair comparison. If you want to teach that you can write(2) to raw FDs in Rust, you can, just like you can use write(2) or fprintf(3) in C.
C has a standard library which students should understand even though it's making system calls deep down. Rust has a standard library which students should understand even though it's making system calls deep down (in fact, sometimes through the host C library).
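To make that concrete, here is a minimal sketch (assuming the libc crate as a dependency, which is not from the original comment) showing the raw write(2) path next to the standard-library path in Rust:

    // Minimal sketch (assumes the `libc` crate): the same idea expressed once
    // via a raw write(2) on fd 1, and once via Rust's standard library.
    use std::io::Write;

    fn main() {
        let msg = b"hello via write(2)\n";

        // Direct system call through the C library, bypassing Rust's I/O abstractions.
        let _ = unsafe {
            libc::write(1, msg.as_ptr() as *const libc::c_void, msg.len())
        };

        // The idiomatic standard-library route; deep down this also ends in write(2).
        std::io::stdout()
            .write_all(b"hello via std::io\n")
            .expect("write failed");
    }

Both end up at the same system call; the difference is only how much of the standard library sits in between.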
I certainly see the value in knowing C and Unix, and that was my education over two decades ago as well. But I also watched many people quit computer science altogether because they struggled with heisenbugs caused by C pointers. If they could have been kept on track by Rust compiler errors and higher-level abstractions, maybe they would still be in the industry today, learning whatever else they needed instead of quitting in their first semester.
Is going from high level to low level somehow worse?
I went from very high level (C# web and even WebAssembly) to C,
and while I believe I learned a lot and my understanding of computers improved,
I think the biggest lesson is that one of the most important programming ecosystems (C) is very messy and painful.
Not because it must be painful, but because of decisions made decades ago, maybe some inertia, maybe backward compatibility, maybe culture, who knows?
Low-quality compiler messages, ecosystem fragmentation, a terrible standard library (where are my basic data structures?), memory management being a minefield, etc.
C gets a bad rap because there are now alternatives built by finding solutions to problems we only know about because of C's existence. Compiler messages, the standard library, and memory management are all things we can agree are terrible nowadays, but when C came out it was a huge improvement over the norms before. Also, it's important to remember that even "big" things like Unix were at one point just a few thousand lines of code.
After being a web developer for 10+ years, I'm getting into C for the first time. I'd had a bit of experience with Objective-C years ago when I did some iOS work, but that was the "lowest" I'd gone down the stack.
There's a lot of unfamiliar territory, but I'm really enjoying it. When it's complex, it feels like it's just inherently complex. Which is a breath of fresh air for a web developer. I'd gotten so sick of the bullshit complexity that comes along with the high-level work; programming feels fun again.
What happens when the rest of the team does not rise to the occasion? You now have a happy but very mediocre team for the task at hand. Decision making is very democratic but seldom happens in time.
I feel you need pace setters, but without excessively rewarding individual heroics.
A very mediocre team that does not rise to the occasion (aka fails at delivering) will probably look bad to management and will be sidestepped (aka outsourced) if not fully replaced.
Well, those OS differences taught us to write (or at least strive to write) portable code. They also taught me to better appreciate the strengths of different operating systems over the years.
Samba for file sharing? I led development efforts for the first Samba release on VMS (IA64) while working at HP. It was not performant enough compared to ASV, but we were on the right path.
I know NetApp (whose stack is based on FreeBSD) contributed significantly to bhyve when they were exploring options for virtualizing Data ONTAP (C mode):
https://forums.freebsd.org/threads/bhyve-the-freebsd-hypervi...