Hacker News
C# Language Specification 5.0 (microsoft.com)
96 points by kristianp on June 25, 2013 | hide | past | favorite | 93 comments



An interesting change in C# 5.0 involves the handling of foreach iteration variables when used in closure scope. This is technically a breaking change, although it is really hard to think of a case where the old behavior was legitimately desired.

Consider (stolen from the spec doc):

  int[] values = { 7, 9, 13 };
  Action f = null;
  foreach (var value in values)
  {
      if (f == null) f = () => Console.WriteLine("First value: " + value);
  }
  f();
In previous versions of C#, there was a single value variable in scope for the entire loop, so the above code would actually print out the last value in the list, because that was the value of value (sorry) when the loop finished. In C# 5.0, the value variable is scoped to each individual iteration (so within the braces). Now, if you use your iteration variable in a closure, it won't change on you when the loop iterates, and the above code actually prints out the first value.

The same cannot be said for a for loop. It still functions as it always has.
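Before C# 5.0, the standard workaround was to copy the loop variable into a fresh local inside the body before capturing it. A minimal sketch (illustrative names only), using a plain for loop, where the copy is still necessary even in 5.0 because the loop variable remains shared across iterations:

```csharp
using System;
using System.Collections.Generic;

class CaptureDemo
{
    static void Main()
    {
        int[] values = { 7, 9, 13 };
        var actions = new List<Action>();

        // A plain for loop still has one variable shared across iterations,
        // so the per-iteration copy remains necessary even in C# 5.0.
        for (int i = 0; i < values.Length; i++)
        {
            int copy = values[i];                        // fresh local per iteration
            actions.Add(() => Console.WriteLine(copy));  // captures 7, then 9, then 13
        }

        foreach (var a in actions) a();                  // prints 7, 9, 13 on separate lines
    }
}
```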


Here is Eric Lippert discussing the pros and cons of making the change.

http://blogs.msdn.com/b/ericlippert/archive/2009/11/12/closi...


Should have been that way from the beginning; having a single loop variable shared across all iterations was absurd to begin with and forced developers to copy the loop var to a local before using it in a lambda.


Yeah, they could have made the change in C# 3.0. I just don't think anyone really felt the pain until the Task Parallel Library came along in .NET 4. Since the TPL API is lambda heavy people really started using lambdas for asynchronous tasks, which leads (or used to lead, anyway) to pain and misery when using a loop iteration variable.

It could always be worse, though. Look at JavaScript. Simply copying the loop variable in the body of the loop doesn't work because all variables are scoped at the function level. So your "local" loop variable is actually a single instance for the entire function and you have the same exact problem. Instead, you have to actually create another function and immediately execute it with the loop iteration variable as an argument, and then use the parameter within the function to do whatever you wanted to do. Not cool.
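A sketch of that pre-ES6 JavaScript pattern, for comparison (variable names are just illustrative):

```javascript
// Pre-ES6: var is function-scoped, so every closure created in the loop
// body sees the same variable. Wrapping the body in an immediately
// invoked function creates a fresh binding per iteration.
var fns = [];
for (var i = 0; i < 3; i++) {
  (function (n) {                        // n is a new variable on every call
    fns.push(function () { return n; });
  })(i);
}
console.log(fns.map(function (f) { return f(); }).join(", ")); // "0, 1, 2"
// Without the wrapper, every function would return 3.
```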


Agreed.


Good nitpicking. But as you indicated, the change of behaviour should only affect people with a very dubious/interesting approach.


No, I and many of my colleagues ran into this bug often. Because I was so familiar with the problem, I could guess it was the culprit in someone else's code after looking at it for a few seconds.

It really was a horrible wart that needed to be fixed.

Now if they would only fix the misleading, lying, dishonest "downcast on type bound" compiler error messages, then C# would lack warts altogether.


The OP wasn't saying 'this bug isn't a big deal.' He was saying 'the bug fix shouldn't break existing programs,' because people, like you, designed around this. Few people, if any, designed their apps to rely on the closure seeing the shared variable's final value - that's why this 'breaking fix' should break almost no programs.


Yes, you are right, my reading comprehension of OP's comment sucks.


Same here, it bit me quite a few times and was highly irritating each time. Very annoying to choose between copying the loop variable, or passing it off to some other method to gin up an action/func safely.


A note to those who may be confused: C# 5.0 is the language version used by Visual Studio 2012, which was released in August of last year alongside the .NET Framework 4.5. This is an updated revision of the spec, although a cursory look didn't reveal any differences to me.

I'm sure a few people were wondering if this was to go alongside Visual Studio 2013, expected to be released in beta later this week.


C# 5.0 adds asynchronous methods [1] and caller info attributes [2] to the language. They also changed (or rather, fixed) the way foreach loop variables are captured in closures [3], and changed the overload resolution algorithm and argument evaluation order [3].

[1] http://msdn.microsoft.com/library/hh191443.aspx

[2] http://msdn.microsoft.com/library/hh534540.aspx

[3] http://msdn.microsoft.com/library/hh678682.aspx


I'm sorry, I meant that I didn't see any differences from the C# 5.0 specification released in August 2012 and the revision posted here, which was updated in June 2013. I am aware of the differences from c#4 to c#5.

My point being that c# 5.0 is around 1 year old, and this spec is not new.


Maybe someone else will still find the information helpful. I also just noticed that the copyright still says 1999-2012 - maybe they just reuploaded it.



Here's another version: https://www.dropbox.com/s/4n6sg31v0v3nfok/CSharp%20Language%...

5.48 MB / Has bookmarks from document headers / For download and offline reading if you want to navigate to specific chapter quickly / Not for online viewing (the parent version is only 3.70 MB and is better if you're bandwidth-capped).


I'm curious - what's the difference between yours and the parent's in terms of the conversion?


This one is slightly bigger (a trade-off) because I included bookmarks (created from headers in the word document).


Asynchronous methods... interesting. It seems to me that with each version they incorporate a little of whatever paradigm is hot into a language that started as a sweeter (or less painful) Java. I remember that during the Ruby boom they added the "dynamic" feature. Together with LINQ and lambdas, is it just me, or do these attempts sometimes appear awkward and out of place?


As a (hardened, old) unix dev, it seems to me you've spent little time with the language. C# is one of the most successfully evolved languages of recent times, no feature was added that didn't have a clear & concise story in the context of all other features.

No, Anders & co. didn't just add whatever they felt was fashionable; they've designed a language over the course of a decade with (to my knowledge) zero warts, bringing to the table more than 20 years of prior design experience (Turbo Pascal, Delphi).

If it weren't for the stigma of being a Microsoft design, C# would be a hugely more popular language.


No language has zero warts. I use it every day and love the language, and it's still sometimes painful to do good OO compared to Smalltalk. No metaclasses sucks; static methods are a poor substitute: you can't override them, and you can't call them on passed type references without reflection, which is ugly. Multiple equivalent syntaxes for anonymous delegates is a wart; they should have just started with the shorter one. Foreach over a container lets you declare the wrong element type and throws a runtime cast exception, when it could often know at compile time that the cast is not valid.

Still, great language, love it.


> zero warts

C# is a nice language, but oh man, is that asking for it!

My pet example of a wart is the set of requirements on the iterated collection in foreach loops. Most C# developers probably think that the collection has to implement IEnumerable, but it is not so! I pasted the requirements from the spec at http://pastebin.com/EfsAmz6f

This is also a case where the dynamic type does something different than the static type. If you iterate over a statically typed non-IEnumerable collection, it will work fine (assuming it implements the required methods). But if you then change the type to dynamic, I think you get a runtime error.

So there's a wart for ya!
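To illustrate with a made-up type (Countdown is hypothetical, written against the pattern rules described above): foreach only needs a public GetEnumerator() whose return type has a bool MoveNext() and a Current property, no interface at all:

```csharp
using System;

// Hypothetical type: satisfies the foreach pattern without implementing
// IEnumerable in any form.
class Countdown
{
    public CountdownEnumerator GetEnumerator() { return new CountdownEnumerator(); }
}

class CountdownEnumerator
{
    private int n = 4;
    public int Current { get { return n; } }
    public bool MoveNext() { return --n > 0; }
}

class Program
{
    static void Main()
    {
        foreach (var i in new Countdown())   // compiles fine: the pattern is satisfied
            Console.WriteLine(i);            // prints 3, 2, 1

        // dynamic d = new Countdown();
        // foreach (var i in d) { }          // would fail at runtime: the dynamic
        //                                   // binder requires the interface
    }
}
```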


Collection initializers also don't require an actual collection. They just need to initialize a type that implements IEnumerable and has an Add method:

  class Heh : IEnumerable
  {
      public void Add(object x) { Console.WriteLine(x); }
      public IEnumerator GetEnumerator() { return null; }
  }

  static void Main() { new Heh { "a", 1, DateTime.UtcNow }; }
Prints out:

  a
  1
  6/25/2013 4:51:38 AM
It might have been the right engineering approach all things considered. But having these special things that only the language can decide is annoying. The fewer the language primitives, the better.


It actually makes some sense - this is not a language primitive but syntactic sugar for a common task baked into the language. It's the same for using, foreach, events and LINQ query expressions. In all cases you have to map the language construct to methods and properties.

Now you can argue whether it is better to just map to the methods Add<T>(T item), GetEnumerator<T>() and Dispose() or if you require an interface like IInitializable<T>, IEnumerable<T> and IDisposable. On one hand requiring an interface just adds some overhead, on the other hand it makes the intention much more explicit. They came up with different solutions for different cases - using requires the interface, foreach supports both ways and there is no interface for collection initializers.

I don't understand why foreach supports both ways - if I implement GetEnumerator(), adding IEnumerable to the interface list seems a reasonable requirement, just like in the IDisposable case. I can understand that there is no interface for collection initializers, because this feature was added in C# 3.0 and would have required adding a new interface to all existing collection classes, changing existing interfaces, or starting with an inconsistent implementation. Would be interesting to hear the details of those decisions.
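For comparison, the using-to-Dispose mapping is roughly the following expansion (Resource is a made-up type just for illustration):

```csharp
using System;

class UsingExpansionDemo
{
    // What the compiler roughly generates for:
    //     using (var d = new Resource()) { Console.WriteLine("body"); }
    static void Main()
    {
        Resource d = new Resource();
        try
        {
            Console.WriteLine("body");
        }
        finally
        {
            if (d != null) d.Dispose();   // Dispose runs even if the body throws
        }
    }
}

class Resource : IDisposable
{
    public void Dispose() { Console.WriteLine("disposed"); }
}
```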


I'm arguing it's better to build the duck typing features that foreach and initializers use into the language itself. Every hard-coded keyword and to some extent, feature, is a strike against the language. foreach and stuff like that should be part of the stdlib.


But then you have to make the language and syntax flexible enough to deal with that – while also retaining ease of use, compilation speed (you don't want to go into C++ territory there) and perhaps other factors. All in all it's probably a tradeoff and regardless which way it sways, people are inconvenienced by it.

And arguably there are other languages which can and do fix that gap. Nemerle comes to mind. .NET allows you to mix languages freely so I think switching for parts where it makes sense isn't such a bad idea.


I was aware that something remotely similar is done when rewriting LINQ query expressions, but I was not aware of this one. My first thought was that they added it to support dynamic and objects not implementing IEnumerable, but as you mentioned, that is exactly the case where it would not work. I had a look at the C# 1.0 specification and it's already in there, though a lot less verbose.

Any idea why they did this? The only thing this seems to achieve is that you can implement IEnumerable and IEnumerator without explicitly stating it. Are there any (common) classes to which this applies?


One big benefit of using structural (rather than nominal) typing in the foreach construct is that value types don't need to be boxed.


I am not sure about this - why would requiring an interface require boxing?


Interfaces are reference types and so if the GetEnumerator method on a struct is called through an interface then it must be boxed.
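Concretely, using List&lt;T&gt; as the example (its enumerator is a public struct):

```csharp
using System;
using System.Collections.Generic;

class BoxingDemo
{
    static void Main()
    {
        var list = new List<int> { 1, 2, 3 };

        // foreach uses the pattern directly: List<int>.GetEnumerator() returns
        // the struct List<int>.Enumerator, whose methods are invoked without
        // any boxing.
        foreach (var x in list) { Console.Write(x); }

        // Going through the interface instead forces a box: the struct
        // enumerator is copied to the heap behind an IEnumerator<int> reference.
        IEnumerable<int> asInterface = list;
        foreach (var x in asInterface) { Console.Write(x); }
    }
}
```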


I see your point, but I am not yet convinced. At least if the interface method is not explicitly implemented, I can call the same method without going through the interface. But I don't know the spec well enough to judge whether this would be a viable optimization to avoid boxing, or whether it could cause different behavior, for example if you start handing out the this pointer. Thinking a bit more about it, it probably does. Maybe I will have a closer look out of curiosity.


Yes, if the method is not explicitly implemented then you can call it without going through the interface.

But at that point it's just a regular method that might also happen to implement part of an interface that you the programmer are not using (at least at that point).

Likewise in the case of foreach, the compiler is looking for a particular method and does not need to care whether or not that method happens to also implement part of an interface, so there is no point in making the programmer go through the trouble of marking the type as such.


> No, Anders & co. didn't just add whatever they felt was fashionable; they've designed a language over the course of a decade with (to my knowledge) zero warts...

There was definitely one that got me (they made a breaking change to fix it in C# 5.0, though).

http://blogs.msdn.com/b/ericlippert/archive/2009/11/12/closi...

I agree completely, though, that it is remarkable how well-designed the language is.


Array covariance is another one. They added it because Java had it. And now they regret it.


Interesting. Did a Google search which turned up this explanation about why it's considered broken: array covariance means runtime type checks rather than compile-time checks, which are more expensive and could hide bugs that could be caught when compiling. (http://blogs.msdn.com/b/ericlippert/archive/2007/10/17/covar...)


I felt it compromised C++'s low level compatibility in exchange for increasingly verbose expressions. Perhaps C# has nothing to offer except for Microsoft's ecosystem. Valid in 2005 but not today.


I'm a die-hard C++ guy but years ago I wrote a few 10k's of C# and found it quite productive. Here's what I liked about it:

* Garbage collection

* Lightning fast compilation

* Pointer-safe subset good enough for most common uses.

LINQ is also considered a huge win by many, though I suspect it could also be done in C++.


Now if they just used the same compiler backend across their C++ and C# toolchains.


Well there is the managed C++ backend, if you're into that sort of thing. I've been known to use it once or twice. :-)


I did use it a few times, back when .NET was still beta. So I went through Managed C++ and C++/CLI transition.

I also find it nice that with C++/CX VC++ is offering what C++ Builder does for years in terms of C++ RAD development.

Nowadays most of the consulting projects I work on are in JVM and .NET land.

What I really wanted to say is that I would like to see NGEN and the .NET JIT offer the same types of optimizations as VC++ does, like SIMD and auto-vectorization.

When targeting Windows Phone 8, .NET gets compiled to native code with such type of optimizations using a cloud based AOT compiler. Maybe those improvements will eventually be brought into the standard framework.


In C# you need to go pretty far out of your way to implement an exploitable codepath. (It's not _hard_, and I even found one due to the JIT.) But for almost all apps you get better security (and tooling), better turnaround, and, without opening an escape hatch, verifiable type safety - that's pretty fantastic.


Can you please elaborate on the exploitable codepath you found that was due to the JIT? That sounds interesting!


This was in an early version of .NET, over a decade ago. Basically, certain calls in the CLR are marked in a special way so they can be handled by internal CLR code instead of running managed code like any other function. I think Array.Copy is one example.

If you created a delegate (function pointer) to such a special function that hadn't been called yet, it'd point to the wrong place and let you do things that shouldn't be possible.

The neat thing is that the code was valid and verifiable (so it'd run in low-trust), but you'd still end up owning a function call to the wrong point of memory due to a runtime bug.

Like practically every CLR vuln, it's the loading platform that is affected, not the applications. I think it's only important when relying on the complicated sandboxing restrictions (CAS).

I'd like to publish a working example, but haven't had the time to figure out which old OS allows the specific old .NET version to replicate the behaviour. I only found it by accident and was more concerned about finding a workaround to not crash.


Did you report it and did it get fixed?


Yeah, I had some friends and did an internal report. It's long since fixed :)


Awesome, thank you.


I disagree with the "Microsoft platform" bit. The Mono/Xamarin guys have been coming up with a very nice competing/complementary platform. I've built personal and production projects running on Linux using Mono.

While I haven't used them, the Xamarin iOS/Android development platforms look quite interesting. I suppose they have the stigma of a somewhat pricey per-seat license, but it's likely worth it if you want a shared codebase for iOS/Android/WP/Win8.

There's also MonoGame[1] which targets developers of MS's former XNA game dev kit.

[1] http://www.monogame.net/about


As a former C# dev, here's my subjective view:

The stuff actually fits in the language pretty well. What has resulted, however, is that there are multiple ways to skin a cat in C#, and there's very limited guidance on what's considered idiomatic.

This is complicated by the fact that many .NET/C# shops have coding standards in place that don't go much beyond C# 2.0. In the last C# interview I did, at an allegedly "progressive" .NET shop, the interviewer barred me from using the null-coalescing operator in a whiteboard example because "That's hobby project code. It's not how we write production code."

Great language, terrible culture.


Yes, the endless fights of people confused about "var" and avoiding it show that a fair amount of MS's target group is simply not interested in proper language research or design. I'll admit that I love the ideas behind the CLR (the VM design is pretty slick, although it has stagnated in recent years). I think some parts are really excellent (the generics), but when I approach a vendor and hear they're using .NET, it is a slightly negative flag.

I too had a boss (new owner after acquisition) say "no you're not allowed to use lambdas".


If you want to start a flamewar just go to a C++ newsgroup and ask around about using type inference with auto.

This is typical enterprise stuff. I bet most typical enterprises already making the switch to Scala and F# also force their developers to type-annotate everything.


> What has resulted, however, is that there are multiple ways to skin a cat in C#, and there's very limited guidance on what's considered idiomatic.

I find C# to be much more idiomatic than C++ or Scala. If you think C# is bad...stay away from those languages.


That is just a symptom of the typical enterprise culture.

The consulting company I work for still gets requests for projects in Java 1.4, for example.

My last .NET project we had some issues because there was a mix of .NET 3.5 and 4.0 across teams.


Way to go, extrapolating crass generalisations from one dud interview. Every single dev team I have come across thinks they are the duck's nuts, and the team leads even more so, resulting in embarrassing overconfidence in their own opinions, as per your anecdote. I've noticed this over the course of a few decades spanning many languages/platforms. I recall my very first graduate programmer job interview in 1989, where the interviewer asked if I defragged my hard disk weekly. I said no, I do it once in a while, and he couldn't let it go and wanted to argue about how important it is to have a regular defragging schedule. Wtf. Anyway, I didn't get the job.


I program in a lot of languages and C# is head and shoulders above everything. It doesn't feel hacked together at all. It's like someone with a real vision put a lot of work into making a great language. Too bad project requirements usually dictate the language I have to use.


One of them is Eric Lippert. If you use C# you'll enjoy his blog, http://ericlippert.com/.


I personally think they are doing a great job evolving the language. For every version they focus on one or at most a few features and those are really well designed. And I don't think they are following the latest trend - they are adding the things they know how to do [1] and which provide the most value.

As far as I know, dynamic was added to ease interop with native code - I have never seen it heavily used in regular code. LINQ and lambda expressions were introduced together with the Entity Framework, and I am unable to imagine using an O/R mapper without something similar again - it all fits together so nicely.

Async was an obvious next choice - we are getting more and more cores and traditional thread-based parallel programming just does not scale because it is so error prone. And after they extended LINQ to PLINQ they had a good part of the necessary infrastructure already in place.

[1] For example, they would like to add something like Java's checked exceptions, but when they considered it they came to the conclusion that there are too many unsolved problems and that it might require a few more years of research to come up with a really good way to do it, so they postponed it for later reconsideration. In general there is a lot of well-researched theory behind the things they do. Another example is the Entity Framework. They didn't just implement another O/R mapper, but came up with a mathematical theory comparable to relational algebra and then implemented it based on those results.


Yeah, C#'s async is a weak version of F#'s workflows, but with a single hard-coded implementation. Research (Haskell) and other languages filter into F# (and team), which filters into C#. F# has had async workflows since 2007.

LINQ is another example of a place where the C# team spotted one end advantage (queries) and implemented the minimum to make it work. That's why the features around the core of LINQ are half-assed (limited expression trees, ambiguous lambda syntax, very limited type inference).

Overall, C#'s a nice enough language for a lot of projects. The tooling really helps sells it, too. It's just rather verbose and limiting, unnecessarily. I just can't think of any place where C# the language is significantly better than F#.

But, if you're ranking it against the likes of Java, then yes, C# is close to perfect.

Edit: C# implements a lot of stuff right into the compiler, things that'd be useful more generically. For instance, foreach and collection initializers use duck typing that's baked into the C# compiler. That's a generally useful feature, but again, hard coded for a specific scenario. F# has some of that too, but not to the same extent (F# warts arise mainly when you push the edge of F#-C#ish interop).


Why would you need workflows when you have LINQ? It's a full-fledged monadic comprehension, and you can use it with Async beautifully.


Yes, but C# is a better fit for the enterprise world than F#.

Currently my F# code is mainly for scripting when doing .NET projects, but only if I am the only user of such scripts; otherwise PowerShell.

Anyway, I really appreciate the work Microsoft puts into F#, Haskell and OCaml via their research arm.


Why would C# inherently be a better fit for the enterprise world?


Because most Fortune 500 employees, by definition, are code monkeys who won't touch FP concepts, just writing the typical CRUD applications full of design patterns.

Having FP concepts sneak into C# is an easier way to get them introduced into this world than forcing this type of developer to use something like F#.

This is why Erik Meijer decided the best way to bring FP to the enterprise was via Visual Basic.

http://research.microsoft.com/en-us/um/people/emeijer/papers...


C# is an absolute joy to use. The three features you've touched upon in your post are three of my absolute favorites (another being type inference). C# pulls concepts from a variety of other languages and it feels really well structured.

I'm sure I'm not the only one who misses these features when they're working in Java.


I'd be hard-pressed to find another language that I'd want to use for my server-side development work. It's hands down the most expressive C-style language in existence. Coupled with Visual Studio, the amount of quality software that can be written in a short amount of time is nothing short of amazing. (This is coming from someone who was a Rails dev in a previous life.)


Can you explain what you find more expressive in C# compared to Ruby?

I've been a C# developer for over 5 years and I have to disagree with almost everything you said. Ruby blew my mind when I learned it.


He did say C-style language. I went from Ruby to Java. I have found it very hard to accept the Rails maintenance story since then.


I am sorry, but the amount of time it takes to install Visual Studio is also nothing short of amazing.


I'm just echoing what everyone else here said, but they do a great job at adding these features to the language. I believe that if you learned C# as a first language nothing would feel out of place.


I wish we had an accurate spec for Ruby. The RubySpec project is great, but we still let the language be defined in terms of the MRI implementation.


Sun started the trend when Guy Steele and Gilad Bracha (among others) wrote the initial Java Language Specification. Microsoft has a strong team of PL researchers, including Mads Torgersen and many people at MSRC (who published a paper on formalizing Async in ECOOP 2012), to do this work, which I think is very useful.


MS Word or Word Viewer to see the specification? Really?

Good to see nothing ever really changes in Redmond.


Is there something in this document that prevents you from reading it in your preferred viewer? The extension suggests it's just an Office Open XML doc. I can read it in QuickOffice on my Android phone.


Given that my preferred viewer for this kind of thing is Okular, with a fallback to Firefox+pdf.js if necessary, it's already struck out twice.

My Android doesn't have QuickOffice; it does have ThinkOffice, but I can think of no reason I would a) read a computer language spec on my cell phone, or b) ever use ThinkOffice on purpose.


I think you've missed my point there. I wasn't suggesting you read it on your phone; I was demonstrating that you don't need Word, using what was handy to me at the time (QuickOffice is owned by Google and was preinstalled). It's not a Word document. It's an Office Open XML document which dozens of free and commercial word processors and web apps can read and write. Not using your choice of previously-proprietary document formats doesn't make this an "oh, Redmond" moment.


Office "Open" XML is a proprietary format that got rubber-stamped as a standard to convey a sense of legitimacy. Significant parts of that standard say "do it like Word does" with varying degrees of indirection. Other office suites and viewers support it because they have to, not because the "standard" makes it any easier to do so.

Comments like this one suggest that the rubber stamp does indeed convey some legitimacy, which I'd consider unwarranted.

As a specification for a language coming out of Microsoft, it isn't unexpected to see it in Word format, and it's certainly possible to cope with it using any number of other tools without resorting to Word, but that doesn't make it a good idea. Use PDFs.


> Significant parts of that standard say "do it like Word does" with varying degrees of indirection

That's not correct. What it actually does is reserve some markup for use by third parties that have reverse engineered various old programs (including programs that competed with Microsoft programs), so that if those people have workflows that depend on features of those old programs that cannot be represented in OOXML, they can still use OOXML as a storage format but add in the extra information they need.

Here's the use case this is aimed at. Suppose I run, say, a law office, and we've got an internal document management system that does things like index and cross reference documents, manage citation lists, and stuff like that. The workflow is based on WordPerfect format (WordPerfect was for a long time the de facto standard for lawyers).

Now suppose I want to start moving to a newer format for storage. Say I pick ODF, and start using that for new documents, and make my tools understand it. I'd like to convert my existing WordPerfect documents to ODF. However, there are things in WordPerfect that cannot be reproduced exactly in ODF, and this is a problem. If my tools need to figure out what page something is on, in order to generate a proper citation to that thing, and I've lost some formatting information converting to ODF, I may not get the right cite.

So what am I going to do? I'm going to add some extra, proprietary markup of my own to ODF that lets me include my reverse engineered WordPerfect knowledge when I convert my old documents to ODF, and my new tools will be modified to understand this. Now my ODF workflow can generate correct cites for old documents. Note that LibreOffice won't understand my additional markup, and will presumably lose it if I edit a document, but that's OK. The old documents I converted should be read-only.

Of course, I'm not the only person doing this. Suppose you also run a law office, with a WordPerfect work flow, and are converting to an ODF work flow. You are likely going to add some proprietary markup, just like I did. We'll both end up embedding the same WordPerfect information in our converted legacy documents, but we'll probably pick different markup for it. It would be nice if we could get together, make a list of things we've reverse engineered, and agree to use the same markup when embedding that stuff in ODF.

And that's essentially what they did in OOXML. They realized there would be people like us with our law offices, who have reverse engineered legacy data, that will be extending the markup. So they made a list of a bunch of things from assorted past proprietary programs that were likely to have been reverse engineered by various third parties, and reserved some markup for each.


Be happy it's not XPS.


I recall years ago reading that the reason Microsoft releases their documentation in Office format (other than the whole eat-your-own-dog-food thing) is so that they have the ability to cryptographically sign the documents.


Fair enough, but PDF can do that as well.


But don't you need Adobe Acrobat to sign it? I won't fault them for using their own tool.


If we were talking about almost anything else from MS I'd probably agree with you. But this is the specification of a programming language about which MS has tried over and over to assuage the fears of open-source developers. "Oh, it's standardized by ISO and ECMA!" "Look, Mono is proof it can be implemented cross-platform!", etc. And then they do stuff like this.

On the other hand look at the excellent work done by Microsoft Research (and then look at what format that final work is usually published in).


As a regular reader of Microsoft Research papers, I can assure there are quite a few of them published in Word and PowerPoint formats.


Quite a few of them are also LaTeX. Generally it's probably the same distribution of formats you see in academia as well. And often the publishing journal dictates what to use.


what on earth would you expect?


PDF or Postscript. Word is not a format in which you should share a finalized document. Why does anyone need to edit a spec?

If there is a good reason to allow editing, perhaps they should have made it an HTML document, or used LaTeX. Are they afraid the Office division will get them fired if they use anything other than Microsoft products?


> PDF or Postscript. Word is not a format in which you should share a finalized document. Why does anyone need to edit a spec?

PDF, sure. Postscript, uh, maybe missing the target audience a bit there.

> If there is a good reason to allow editing, perhaps they should have made it an HTML document, or used LaTeX. Are they afraid the Office division will get them fired if they use anything other than Microsoft products?

hahahahaha. Perhaps they think it's a good wordprocessing product which can be used by many different roles in the business?

And yes, there's a lot to be said for a company using its own products where they fit.


Maybe use EPUB then, if you don't need the vector processing facilities of PDF. I find a mostly-text book will be 20% of the size as an EPUB.


Wouldn't XPS be the best equivalent to PDF?


I think that's the near-equivalent but I don't think that even Microsoft is trying to dogfood that anymore.


Does anyone know if C# is used inside Microsoft to build desktop apps (Office? Visual Studio?). I remember it wasn't the case for a long time, but I stopped watching a while ago.


The Visual Studio IDE and Expression Blend are (or at least were) WPF... I guess there's a pretty good chance it was C# rather than VB.NET.

Not sure if there is anything else though.


Visual Studio since 2002 (.Net 1.0) has been partly written in C#/WinForms and then C#/WPF. Visual Interdev (from which the current Visual Studio lineage descends) was partly written in VJ++. It was these bits that were ported to C#/COOL for the 2002 release.


Isn't part (majority) of their Singularity OS written in C#? I've been waiting to see if they were ever going to release the realtime GC as part of that project to the C# community at large.


Why, if this is version 5.0, does the document it downloads say version 4.0? Am I the only one seeing this? The details on the webpage also mention previous version 3.0.



