C#88: The Original C# (2018) (medium.com/ricomariani)
158 points by cpeterso on April 1, 2019 | 70 comments



Never heard of this. Very interesting. Fun fact: C# was originally called ‘Cool’ internally in MS before the MS marketing took over and rebranded everything Visual X this or X .Net that. Who doesn’t remember Windows Server .Net (later rebranded to 2003), which did not have a single line of CLR code in it apart from the standard runtime?


A lot of this line of work also originated from efforts to solve problems in COM, which was becoming central to Microsoft's approach to platforms. The idea was to make it easier to produce objects in different languages and then let them interoperate at runtime. In that regard, Microsoft's notion of a common runtime environment for objects was a natural follow-on to COM's concept of objects abstracted behind interfaces.

There was also a great deal of confusion around the way Microsoft originally introduced COM. When Windows 3.1 came out, one of its marquee features was something called OLE 1.0 - Object Linking and Embedding. This was a composite document approach built on an early technology called DDE, Dynamic Data Exchange, which was brittle and based on Windows messaging. A few years later, they introduced OLE 2.0, which added a bunch of features, but mainly completely replaced DDE with COM. For a while, COM was widely conflated with OLE, as if it were something OLE 2.0-specific.


Wasn't the framework itself originally called COM+? The environment variable tweak knobs still carry the prefix "ComPlus_".


You may be thinking of COR, for "Common Object Runtime", which was formerly known as COM 3.0.

http://www.danielmoth.com/Blog/mscorlibdll.aspx


COM+ is an enhancement over regular distributed COM that was released with Windows 2000.


Some of the files open-sourced as part of CoreCLR still have a .cool file extension:

https://github.com/dotnet/coreclr/blob/release/1.0.0/tests/s...


The first book I bought about .NET was from Wrox and called "Professional ASP+". Only later did everything get labeled X.NET.


Looks like .NET was called "Cool" during early development, "Something+" (e.g. "ASP+") during beta, and ".NET" when released as 1.0.


This appears to have only the # in common with C#, since it's mostly an extension of C, like C++ or Objective-C, etc.


Is this related to C# having this structure where everything is split into hundreds of csproj files and multiple solutions?

And this must be related to needing a .pdb file or being unable to debug at all.

I wish those two things were not the case. And nuget.

I find C# wonderful except those particular decisions give me pain on a daily basis.


I think splitting into projects with their own separate dependencies is/was actually one of the few advantages to C#, because it lets you replace an entire project and its dependencies seamlessly.

This has been extremely handy for us as we've been slowly upgrading to .NET Core, because we can literally do it one project at a time without even risking breaking anything. Before going to .NET Core, it made switching from Web Forms to MVC really, really painless, because all the business logic lived in its own projects, allowing us to run and maintain the Web Forms project while building its replacement within the same solution.

I guess it requires you to think about the architecture of your solution, but what stack doesn't? I say "is/was" an advantage, though, because its primary use cases are frankly mostly related to the past or to really bad practices. It allows you to keep a single project running on some really old .NET version to utilise something legacy. That's bad, but sometimes it has real-world value, even if it's bad.

NuGet has been fairly terrible though, and despite its many improvements, frankly continues to be so.


My experience has been the same. It's great when the code base is designed and maintained by people who view assemblies as isolated, fully separable entities. But many programmers walk into dotnet not understanding the approach and make a mess of it.


It's for exactly this reason that I think anything that makes the _technically_ optimal set of choices is actually a bad bet: the probability that it's understood well enough to be used in the correct manner is low, and it's entirely possible to use it in an incorrect manner. So that's mostly what you see.


I don't understand that, though. I agree that C# certainly could be clearer about the intended purpose of project separation, and maybe it could even be stricter, but the default way to build projects in C# is with a high amount of isolation. You can break this isolation, but you have to deviate from the standard. Hell, breaking some parts of the project isolation in C# is even quite tricky.
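To make that concrete, here's a minimal sketch of the default isolation and the one sanctioned way to break it (all names hypothetical): internal members are invisible to every other project in the solution, even ones holding a direct project reference, unless the assembly explicitly names a friend with InternalsVisibleTo.

    using System.Runtime.CompilerServices;

    // The sanctioned hole in the isolation: explicitly name a "friend"
    // assembly (commonly a test project) at the assembly level.
    [assembly: InternalsVisibleTo("Billing.Tests")]

    namespace Billing
    {
        // internal: unreachable from any other project in the solution,
        // except the friend assembly named above.
        internal sealed class InvoiceCalculator
        {
            internal decimal Total(decimal net, decimal taxRate)
                => net * (1 + taxRate);
        }
    }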

Ironically, consolidating your package versions across projects is what the standard NuGet tooling itself suggests, but I've already shared my views on NuGet being terrible.

Anyway, my point goes something like, if you’re not utilising OOP “correctly” in C#, then why would you be using OOP “correctly” in a language that is even less opinionated about isolation than C#?

I say “correctly” in quotes because I personally like a high amount of isolation in my OOP, but I’m sure not every one does.


Agreed... imho, make the code as discoverable and simple as possible. It's easier to copy/paste or just replace classes as needed than to make them composable or set up huge injection profiles.


I've seen it as a mess far more often than something clean that ever gets replaced. I once worked on a project where a relatively simple change meant changing/adding interfaces in over 36 files across 17 projects in 2 solutions. It took a week and a half to thread that noodle through the pile of spaghetti.

My opinion today is that it's easier to have more understandable code than to apply "Enterprise" patterns early on. Create classes and test against them instead of deep DI/IoC patterns and interfaces everywhere. In the end, build it like it's throwaway code and have high code-coverage requirements when in doubt.

I do understand the various patterns, and have worked with them, and more often than not, the path leads to a big mess in practice. I tend to lean away from smarter classes and instead favor POCO + Utility Classes with Single Instances. YMMV of course.


I like the idea of "isolated, fully separable" assemblies, but there's very little explicit advice on this topic and the tools themselves (visual studio and msbuild) are agnostic about that.

I mean, there's nothing to prevent you from spreading out namespaces across multiple assemblies. You can specify namespace and assembly in the "properties" dialog for your project, and also within the source for each .cs file, etc. I've never understood this lack of "guard-rails" in the tooling to prevent these kinds of messes. It seems like that would be a reasonable thing to expect?


(a) How you structure larger applications into projects (assemblies) is largely your own problem. If you have a component that's intended to be used from different projects, then it sometimes makes sense for it to get its own project as well. However, you can just as well cross-include the source file if you want (see the sketch at the end of this comment). There aren't that many things that explicitly require separate assemblies.

(b) Debug symbols have to be there for you to have source-level debugging; without them you can still debug at the assembly level, of course. For Microsoft's tooling, PDB files are the debug symbols that link the compiled code back to the source code. Other tools sometimes embed debugging information in the binaries instead. Either way, if this information isn't there, you can't have source debugging.

(c) No one's forcing you to use NuGet, although I wonder what your particular problems are with it.
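Regarding the cross-include in (a), it's literally one MSBuild item in the consuming project (a sketch; the shared path and file name are made up):

    <!-- In the consuming .csproj: compile a shared source file directly,
         instead of referencing a separate assembly that contains it. -->
    <ItemGroup>
      <Compile Include="..\Shared\RetryPolicy.cs" />
    </ItemGroup>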


(a) It's usually a problem you inherit when you join a team/project: you didn't make the choices, but for some reason there are 200 .sln files and it's impossible to open the entire project in a single .sln file. Well, I've had that once; my current situation is better than that, but still somewhat problematic.

(b) Debug symbols could just be included in the compilation unit and optionally stripped out with a flag when you compile. There isn't a need for them to be in a separate file. I've had a lot of fun times attaching dnSpy to something to half-ass debug it through a decompiler because there was no .pdb and you can't debug without it. Just painful. I wish they'd never split the debug symbols into a separate file.

(c) I guess my colleagues are forcing me to use NuGet, as I can't just show up tomorrow and switch everything to Paket. I don't have to use it on my personal projects, which is nice.

NuGet's problems stem from what I'm talking about here. You need Newtonsoft.Json in various different .csproj files, so you reference it in each of them. If you're not careful, you easily wind up with multiple different versions of .NET in a single .sln file, multiple different versions of Newtonsoft.Json, and the dreaded "multiple conflicting versions of a dependent assembly" warning. Etc.
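For what it's worth, the classic band-aid for that particular warning was an assembly binding redirect in the consuming app's config, forcing every reference to resolve to the one copy actually deployed. A sketch (the version numbers are illustrative; the public key token is Newtonsoft.Json's published one):

    <!-- App.config -->
    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="Newtonsoft.Json"
                              publicKeyToken="30ad4fe6b2a6aeed"
                              culture="neutral" />
            <bindingRedirect oldVersion="0.0.0.0-12.0.0.0"
                             newVersion="12.0.0.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>

It works, but having to hand-maintain these redirects is part of why the versioning story feels so fragile.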

Comparing this to my experience in the Java world: there it's still possible to split the project up and open separate parts if I want, but the whole thing has one unified version of Java and a unified set of dependencies. I just never wound up with the messy tangles I see in .NET projects.

It's about my only beef with .NET stuff. Really irks me. Paket looks like a step forward in terms of package management, but it's not the de facto standard.


Funny thing is, I have the exact opposite problem. I have zero issues with packaging or NuGet in .NET land, but my co-workers and I are regularly stuck in dependency hell in Java and Scala package land. The tooling for Java and Scala builds is absolutely shit, whether you're using Maven or sbt. It really takes days to onboard a new developer into this ecosystem just to get their build system and IDE (IntelliJ) working correctly. I've never had to do anything like shading with NuGet, but we regularly have to do it in the Java world.

For personal projects I would never ever use Java just because the amount of friction around builds and packaging is so terrible.


That seems odd. I'm particularly fond of the fact that every Java build tool has accepted Maven's artifacts as the way to do it, whether it be sbt, Buildr, Ivy, Leiningen, Gradle, etc.

I personally use Maven over sbt for Scala projects though, I much prefer declarative builds to... whatever leaky DSL sbt is rocking.

May I ask why you're routinely shading? I only really use shading for deploying Spark jobs onto a cluster I don't control. Otherwise, if I'm deploying an executable jar, the assembly plugin and jar-with-dependencies do it fine.

Have you seen Capsule? http://www.capsule.io


> For personal projects I would never ever use Java just because the amount of friction around builds and packaging is so terrible.

As a counterpoint, I do tend to use the JVM for personal projects, but I'm also pretty strict about keeping complexity out of the build and deploy process. What that means, practically speaking, is that I compile to uberjars, and deploy as Unix services. I also have a little library that makes it easier to use an embedded in-process database for persistence.

There are surely limitations to this approach, but I don't have enough time for my side projects to get to the point where I ever hit them. So I wind up in a spot where I can generally focus on whatever the small goal of the project is.


Ah, Maven. Yes, not my favorite tool.

You're quite right that by default there's no hand-holding from Java itself on the matter, and if you have to solve it yourself, it's not a fun problem the first time you come across it.

It bothered me in the early days, but after switching to Spring Boot the problem melted away entirely. Go to https://start.spring.io, select what you want, hit download, import it as a Maven (or Gradle) project, and Bob's your uncle. Boot automatically builds a fat jar and it just works. Friction drops to zero in that particular configuration.

I'm not a fan of Maven either. Before leaving Java land I settled on using Gradle and was much happier with that. Some people don't like that it runs a daemon and uses resources or that builds are custom with it, but I don't mind those things at all and I found it a real breath of fresh air compared to Maven.

Another point of difference between native .NET shops and native Java shops that I've found is the approach to development environments. My sample size is small here, but so far I've seen that Java shops run Windows but have development VMs: you just download an image, drop it onto your machine, pull down the repo, and the image is pre-configured with all the right stuff to build and develop. Linux natives are heavily into scripting things, and the devops usually works quite nicely, but I find it also breaks more frequently because it tends to be _somewhat_ brittle.

In .NET shops I haven't seen that. It's mostly been that they run on Windows and develop on Windows (duh), but that means you have to do a bunch of manual setup and spend a day installing stuff when you start, and everyone's machine is a tiny bit different.

I can't say I've found one better than the other, but if it were me I'd run .NET Core development and do it on Linux in VMs.


(b) With a custom build step you can embed the .pdb file into the assembly as an embedded resource, though that's not recommended. The best practice, I believe, is to supply a symbol package in parallel with the NuGet package. _Edit: there's also the symbol server._

Embedding debug symbols into the assembly can sometimes be problematic, especially when you have public and private symbols -- signing three versions of the same assembly is not really a good idea.

(a) (c) I feel you. But here's what I've done about this problem:

1. Use CMake for the source tree. This way you don't have trouble opening the tree as a whole.

2. In Visual Studio you can have both the CMake view and a solution view -- you can open a solution right from the CMake view or switch to another.

3. In this way, a solution becomes a smaller unit for organizing multiple projects together. In case two projects do not have strong coupling, separate them into different solutions and use a local NuGet package source to reference one from the other.

4. Of course, you'll have to roll some custom CMake script for managing NuGet packages. I've done some work on this, here: https://github.com/Microsoft/GraphEngine/blob/master/cmake/F...

I've been using this approach in a few projects at work (and also OSS ones). So far people are happy with it.


Both Java and C# have the same issues with conflicting dependency versions. As for conflicting .NET versions: it _would_ be the same, but Java has strived (for better or worse) to stay compatible, so you don't often run into that issue in Java. That has nothing to do with Maven, though.


(c) is easy to fix within Visual Studio: Manage NuGet Packages for Solution, then choose the Consolidate option.


> Is this related to C# having this structure where everything is split into hundreds of csproj files and multiple solutions?

I agree that having lots of projects is a pain in the ass, and projects should be used for organizing code into deployment units, not for logical separation. But this isn't a feature of the language, just a decision (a poor one, I believe) that a previous developer on your project made.


Projects should be used for logical separation. I think of domain-specific projects as "microservices" that are isolated and where you only talk to the module via well-defined interfaces. You get the logical separation of microservices without the needless complications.

If you need to share a module between teams, it's easy to package it up as a versioned NuGet package. If you need to separate it out into its own HTTP-based microservice, it's easy to put a Controller on top of it and create an auto-generated proxy class if you're describing the API with Swagger.

On the client side, have the proxy class implement the interface.

Of course all this is made easier if you are using a DI framework.
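A minimal sketch of that shape (all names hypothetical; DI registration in the style of Microsoft.Extensions.DependencyInjection): the consumer only ever sees the interface, so promoting the module to an out-of-process service is a one-line change in the registration.

    // The well-defined seam: consumers depend only on this interface.
    public interface IInvoiceService
    {
        decimal GetTotal(int invoiceId);
    }

    // In-process implementation, living in its own project.
    public sealed class InvoiceService : IInvoiceService
    {
        public decimal GetTotal(int invoiceId) => 0m; // domain logic elided
    }

    // Client-side proxy implementing the same interface over HTTP
    // (e.g. generated from the Swagger description).
    public sealed class InvoiceServiceProxy : IInvoiceService
    {
        public decimal GetTotal(int invoiceId) => 0m; // HTTP call elided
    }

    // DI registration: swap one line, and no consumer has to change.
    // services.AddScoped<IInvoiceService, InvoiceService>();      // monolith
    // services.AddScoped<IInvoiceService, InvoiceServiceProxy>(); // microservice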


Sure, but most of the gains from microservices come from splitting large teams into independent teams responsible for sub-modules, which is a much larger scale than most C# applications.


The other benefit of separating functionality into microservices [1] is that it keeps developers from stepping on each other's toes and causing merge conflicts. It also lets you scale independently [2].

[1] Again, I'm not referring to starting out with out-of-process microservices. They can very well be logically separated within a monolith, using class access modifiers as appropriate.

[2] If you have clean namespace boundaries within your monolith, you program against interfaces instead of classes, and you use a DI framework, then pulling an assembly/"service" out of the monolith becomes a mechanical exercise.


I've worked on massive applications that were split between 3 projects in a single solution as well as the inverse.

It has nothing to do with the language. You can organize your code however you want, but overall it's a far better setup than in many other languages that are folder- or path-based instead.


I don't know about that. I too have worked on .NET projects that were structured in very different ways.

IMO, when you see a lot of variance, that's a smell. When there isn't an obvious or known-good way to do something, or there's no convention, you get a lot of entropy creeping in. I find the folder/path-based stuff at least has a convention or two that keep things in roughly the same shape from project to project. I prefer that to what I see with C# projects.

Anyway, it's just my pet peeve.


There's infinite variance in software applications so I expect proportional variance in how those projects are structured. The convention is to just have a single project until you need to separate. It's not that complicated.

But the point is that it has nothing to do with the language.


The splitting is entirely optional though? You split if you want to produce more binaries. If you keep everything in a single project, you get a single artifact.

Having debug symbols in the binaries also seems like an odd choice (at least by default), as they can easily be 10x the size of the binary.


Symbols can be embedded in the assembly nowadays, though it is not the default option.

https://github.com/dotnet/roslyn/issues/12390
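Concretely, in an SDK-style project it's a one-line property (a sketch; this trades a slightly larger DLL for never losing the .pdb):

    <!-- In the .csproj: bake the portable PDB directly into the assembly,
         so it is source-debuggable with no sidecar .pdb file. -->
    <PropertyGroup>
      <DebugType>embedded</DebugType>
    </PropertyGroup>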


I think that's more due to culture. Java culture, to be precise. The same goes for the dozens of mostly-empty deeply nested directories and other fluff.

You can write C# with far less bloat, and compile it using only the command line, but Visual Studio generates the usual project layout by default.


I think that’s mostly a function of Visual Studio legacy, not C#.


Always wondered why .PDB is such a strange format. Several interlaced streams of data, all going in parallel. Now I know.

By the way, .XBF (XAML Binary Format, used in UWP apps) has a similar skeleton. So the heritage of C#88 lives on.


The interleaved page strategy has lots of benefits...


Fascinating. I worked in the developer tools division when the C# of .NET was released, and I had no idea this was part of its history.


> The compiler produced hybrid PCode and native x86 assembly, traditional for MS applications at the time.

P-Code? As in UCSD Pascal?


No, a form of P-code that was Microsoft specific.

https://en.wikipedia.org/wiki/Microsoft_P-Code


P-code (portable machine code) as a term is also used for generic virtual machines (JVM, CLR).


Yes, Microsoft BASIC is very fundamental to understanding the growth of Microsoft. Visual Basic was generating p-code, and I was told that many Office applications (on Windows 3.1) were 90% p-code and 10% assembly. The developers were writing p-code directly.


Microsoft's use of p-code in BASIC predates even Visual Basic. I believe the first use of it was in QuickBASIC 4.0 (~87-88), which had an advertising campaign built around the fact that it could compile at some ridiculously high rate.

What was happening, however, was that part of the compilation process was built into the editor and run incrementally. When you typed in and completed a line of code, it would be shipped off to the compiler, analyzed for errors, and then represented in the editor. This was easiest to see in the fact that the editor would automatically capitalize keywords, but also visible in the fact it would immediately report certain kinds of errors and make other minor code corrections. Heady stuff in 1988 when running at 4.77MHz. (The same was true for the interactive debugging tools built into the IDE.)

In any event, this was the precursor for similar functionality in later products, ranging from MS BASIC 7.0 to the Visual BASIC series.

BASIC 7.0 was also interesting in that it was at the tail end of Microsoft's BASCOM product line for DOS. For years, they'd offered a BASIC compiler product in parallel with the interpreted products that got most of the press and attention. The compiler product was what you used if your interpreted BASIC program needed more execution speed or better packaging. It was BASIC 7 where the QuickBASIC IDE officially converged with the full machine-code compiler. BASIC 7 included the IDE, a full compiler that could also target OS/2, and an ISAM database library. It was really a precursor to all the uses of VB to build line-of-business apps, etc.

> I was told that many Office applications (on Windows 3.1) were 90% p-code and 10% assembly.

The C compiler (at least) had options for compiling C into p-code that would be interpreted at runtime. I've forgotten most of the details other than that it was mainly sold as a code-compression scheme of sorts, on the observation that p-code was more compact, but slower. (The p-code and the interpreter itself were all bundled into an x86-specific binary file, so there was no pretense of the cross-platform aspirations commonly associated with p-code.)


Microsoft was writing BASIC interpreters for the early 8-bit computers. The original Applesoft BASIC built into Apple IIs in the early '80s was written by MS.


Yup. They were founded on a BASIC for the Altair 8800. P-code as an architectural element of their BASICs, however, came ten or more years after that. (IIRC, Applesoft BASIC's execution model was a doubly linked list of statement structures. My guess is tokenized, but not in the same sense as QB4 compiling to p-code.)


I don't remember them being a doubly linked list. IIRC, the first two bytes of a tokenized line were the address of the next statement, and each command was tokenized.

For speed, if you had a "subroutine" (GOSUB/RETURN) that was called frequently, it was better to place it at the beginning of the program, because finding it was an O(n) traversal of the linked list.
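A rough sketch of that layout (in C#, purely for illustration; the real Applesoft was of course hand-written 6502 code operating on raw bytes):

    // Each tokenized line starts with a link to the next line,
    // so the program is a singly linked list in memory.
    sealed class BasicLine
    {
        public int LineNumber;
        public byte[] Tokens;    // the tokenized statement bytes
        public BasicLine Next;   // the "address of the next statement"
    }

    static class Interpreter
    {
        // O(n) scan from the top of the program -- which is why hot
        // GOSUB targets were best placed at the beginning.
        public static BasicLine FindLine(BasicLine program, int target)
        {
            for (var line = program; line != null; line = line.Next)
                if (line.LineNumber == target)
                    return line;
            return null;
        }
    }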

It makes me sad thinking about how much more I knew about the underpinnings of my computer architecture and chosen language in the 8th grade than I know now as a working professional. I can still write 65C02 assembly language using an Apple // emulator but I wouldn’t know where to start with x86.


> I don't remember them being a doubly linked list. IIRC, the first two bytes of a tokenized line were the address of the next statement, and each command was tokenized.

You may well be right... it's been a longer time than I care to admit.

> I can still write 65C02 assembly language using an Apple // emulator but I wouldn’t know where to start with x86.

My job out of school was at least partially to write software for an embedded 4MHz 80188... not too different from an original PC. Even in such a limited environment, virtually everything was written in C. This includes things like interrupt handlers directly invoked by the CPU and task switching code in our primitive RTOS. I think the only assembler was a bit of startup code.

> It makes me sad thinking about how much more I knew about the underpinnings of my computer architecture...

I see your point, but my view is that the goal of many of the abstractions we have is to make it possible to shift our focus to higher level and presumably more important concerns. It doesn't always work out that way, of course - sometimes the abstractions get in the way - but I do miss the power of today's languages when I go back to lower level tooling.

One side note to this is that the embedded project above started out running in real-mode x86, but by the time I'd left, we had ports for 32-bit protected mode, MC68K, and a version hosted on Win32. There were sound technical and commercial reasons for all of this, but it all would have been a lot more costly to achieve in time and money if we'd been coding in assembler. In other words, we arguably gained by not knowing more about the underpinnings of our computer.


I had no idea C#88 was a thing. Thanks for sharing.


In 1988 a Commodore 64 was still a thing. It ran Microsoft BASIC. And .NET was slowly being built. Bill Gates had some wild ideas.


I had a C= 64, but didn't know it ran Microsoft Basic. All I saw was "all rights reserved" and "32kb ram ready".


Microsoft wrote the Basic implementations for most of the early consumer microcomputers. Commodore's microcomputers, Apple II, IBM PC, and others all had a Microsoft Basic implementation built into ROM.


Apparently you can run the following at a Commodore BASIC prompt for a Microsoft easter egg:

    WAIT 6502,1


Cute, but it would be super annoying these days if a platform vendor picked some otherwise completely legitimate value out of an API's parameter domain and used it for an easter egg...

(Yes, you can wait for a value in any of 65,535 memory locations... but not this one, because we needed an easter egg...)


Unfortunately that command is not supported in [0]. What does it do?

[0] https://virtualconsoles.com/online-emulators/c64/


It seems like it was only in Commodore PET Basic. It would print `MICROSOFT!`

https://www.c64-wiki.com/wiki/Microsoft#Easter_Egg_.28Micros...


As far as I can tell, this has nothing to do with .net. They just re-used the name.


The work they did on C# in the eighties, where they tried to come up with a way for programmers to program (control) Windows, has nothing to do with .NET, you mean? Not even as a tangent?


Extreme tangent. Microsoft started as a language company and some of the same people were still around.


Note this has nothing to do with the C# of the .NET world except the name.

C#88 inspired later thinking, but its code basically died in '88.


> a variant of the C language designed for incremental compilation

This is extremely interesting per se, and I wonder why it didn't catch on. Is there anything similar available today? It would be interesting to know more about the original syntax of C#88.


This seems like an April Fool's joke.

The complete lack of any other information is suspicious.


It's real. I was at Microsoft from 1986-1991, working on Windows (2 and 3). We heard frequent stories of this C#, exactly as described, but never got our hands on it.


> I was at Microsoft from 1986-1991, working on Windows (2 and 3)

What was that like? What parts did you work on?

(And thank you... I cut my teeth on Windows 3 in particular and learned a lot, I am sure, from code you helped contribute to.)


https://en.wikipedia.org/wiki/C_Sharp_(programming_language)...

> Microsoft first used the name C# in 1988 for a variant of the C language designed for incremental compilation. That project was not completed but the name lives on.


Interestingly, all sources lead back to Rico Mariani.


Totally not a joke. Also totally not related to C# from .net :-)

Super fun stuff!


I hope not. The article is dated April 25, 2018, which seems way out of band for that.


It is, but it's a double fool's joke. That means the fools are the people who think it's a fool's joke because, in fact, it isn't.





