To head off some of the misconceptions here, this is not about garbage collection in general but Objective-C's built-in garbage collection.
Apple NEEDs to do this for their own sanity. I'm surprised they waited this long actually.
It's a serious PITA to support GC and non-GC at the same time in a framework. Having to add finalizer methods instead of relying on dealloc methods, and dealing with non-deterministic object lifetimes alongside deterministic ones, was a huge problem.
ARC doesn't have these problems because it wraps traditional manual reference counting (to pick nits, it actually bypasses it when it can safely but this is an optimization detail that you shouldn't need to worry about).
To Apple and 3rd party framework developers, it's painful to support GC and non-GC paths in your code at the same time. It's not really possible to use a non-GC compatible framework in a GC app.
It's also impossible to use ARC and still support GC in the same framework. Apple has been completely prevented from using ARC in its own frameworks because of keeping GC-compatible apps working, and making this change will allow them to start ARC-ifying their own code.
It also means that a significant chunk of the ObjC runtime can be simplified.
There are also some great advancements in the ObjC runtime that Apple is doing (like tagged pointers and shadow objects) that the reference Boehm garbage collector can't really deal with, and which were all completely disabled in the ObjC runtime when you had a GC app.
This is a good thing. Not many apps actually shipped with the ObjC garbage collector and ran well.
Now you are free to use your GC for your own needs in your app. That has nothing to do with ObjC. Feel free to add Java or Ruby or whatever language you like to your own app and submit that to the Mac App Store.
Question is, are they removing support from the OS frameworks themselves or just the Mac App Store? There are many applications that still use GC, many of which are not in the store. Does that mean they'll cut support for these in a future 10.10 update or in 10.11? I doubt it.
I'd bet on 10.11 or 10.12. If they don't eventually remove it, there's little point in deprecating it or forcing it off the App Store, as they cannot reap the benefits in their own code.
At the very bottom of the documentation page linked from this page [1], it says it will be removed. Sounds pretty definitive to me.
> Garbage collection is deprecated in OS X Mountain Lion v10.8, and will be removed in a future version of OS X. Automatic Reference Counting is the recommended replacement technology.
That seems a really good argument for killing manual memory management, not for killing the GC option. As a user I want dev time to be spent adding needed features, not chasing down memory cycles (which _will_ happen when you add something like blocks to a language without garbage collection).
As a programmer I don't want to litter my code with disgusting crap that only serves to tell the compiler something it should be able to infer anyway.
My suspicion is that Apple is just doing it because they don't want to put the necessary RAM in their phones (Android phones have something like twice the memory of Apple's at this point).
That is what ARC is for, and it's superior to the GC, which never really worked properly anyway. ARC is deterministic. The only problem is that it doesn't automatically handle retain cycles.
ARC is entirely compatible with manual reference counting, but internally, when ARC detects that the classes in a particular object's hierarchy are not doing anything fancy with MRC, it's super clever and bypasses it for a bit of a speed boost. It means that the developer doesn't have to do anything extra to be compatible with ARC from MRC or the other way around.
> The only problem is that it doesn't automatically handle retain cycles.
That would be like saying the only problem with the Titanic is that it had a few extra holes in it.
Of course what Apple really should do is dump its old tech and move to something new and more modern. I hate to say it but when even Java is more modern and can do stuff your system can't, it is time to upgrade (preferably to something other than Java).
What Apple calls "automatic reference counting" is what is usually known as just "reference counting." They call it "automatic" to distinguish it from the previous, even cruder, system where counts were manipulated directly by the programmer. Obviously this is an improvement, but personally I am a supporter of GC. It is a myth that reference counting is not subject to arbitrary pauses: http://www.hboehm.info/gc/myths.ps
You can, although I haven't seen systems that do this, enqueue objects to be deallocated and then have either the main thread (every once in a while) or a background thread periodically pop objects off the queue and do the normal reference counting dance.
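A minimal sketch of that idea in C++ (the class name, the queue, and the time budget are all invented for illustration, not taken from any shipping runtime):

#include <chrono>
#include <deque>
#include <functional>
#include <mutex>

// Objects whose refcount hits zero are not destroyed on the spot; their
// destruction is queued and drained later, a little at a time.
class DeferredReclaimer {
public:
    void schedule(std::function<void()> destroy) {
        std::lock_guard<std::mutex> lock(mutex_);
        pending_.push_back(std::move(destroy));
    }

    // Called periodically from the main thread or a background thread:
    // destroy queued objects until the time budget is spent.
    void drain(std::chrono::microseconds budget) {
        const auto deadline = std::chrono::steady_clock::now() + budget;
        for (;;) {
            std::function<void()> destroy;
            {
                std::lock_guard<std::mutex> lock(mutex_);
                if (pending_.empty()) return;
                destroy = std::move(pending_.front());
                pending_.pop_front();
            }
            destroy();  // the "normal reference counting dance" happens here
            if (std::chrono::steady_clock::now() >= deadline) return;
        }
    }

private:
    std::mutex mutex_;
    std::deque<std::function<void()>> pending_;
};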
Yup, the problem with this is that it renders the other claimed advantage of reference counting moot (viz. there is now little or no temporal linkage between when the object's ref count hits 0 and when its destructor/finalizer is called).
The other problem with reference counting, which is I believe insoluble, is that it is inherently O(alldata); that is to say you must perform an operation each time a piece of memory dies, at the very least. This makes it good for collecting older generations in a generational GC (where O(alldata) ~= O(livedata)) but bad for the nursery (where the generational assumption means that O(livedata) << O(alldata)).
Any computer processor can do a finite number of operations per time period, and an (effectively) arbitrary number of objects can be marked for deallocation simultaneously. So yes, you can have arbitrary delays.
But this is always the case. Refcounting, GC, manual memory management, etc. But I'd say you've gone one step too far in saying that there is little or no temporal linkage with refcounting. If you're using a queue, objects will be cleaned up as soon as they can be while remaining inside the time limitations you've allowed.
If you want to get fancy, you can even have the compiler rearrange objects such that pointers that are likely to point to objects with destructors are first in line.
And yes, refcounting is O(alldata). But that's ignoring that freeing is relatively fast, and that you can easily implement bunches and bunches of optimizations to prevent refcounting from having to work at all. (For example: if you acquire a reference to an object on the same thread and then release it, you don't have to do anything. If you create an object and can determine at runtime that you never handed the reference to anything else, you can free it immediately. If you acquire multiple references to an object, you only need to check for references reaching zero once. Etc. All of these are things that theoretically can be done with a standard GC, but good luck actually doing so.)
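As a rough illustration of the first of those elisions, here's a hand-rolled intrusive refcount (all names hypothetical); the balanced retain/release pair inside use_widget touches nothing another thread can rely on, so it can simply be dropped:

#include <atomic>

struct Widget {
    std::atomic<int> refcount{1};
    void retain()  { refcount.fetch_add(1, std::memory_order_relaxed); }
    void release() { if (refcount.fetch_sub(1, std::memory_order_acq_rel) == 1) delete this; }
    void draw() {}
};

void use_widget(Widget* w) {
    w->retain();   // +1 to keep w alive for the duration of the call...
    w->draw();
    w->release();  // ...-1 right after, on the same thread.
    // The caller must already hold a reference for w to be valid here at all,
    // so eliding this balanced pair changes nothing observable.
}

int main() {
    Widget* w = new Widget();  // starts at refcount 1 (the caller's reference)
    use_widget(w);
    w->release();              // drop the caller's reference; the Widget is freed here
    return 0;
}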
Reference counting is GC - or rather it solves the same problem that GC solves. I don't see why people don't count it as one. Sure, basic reference counting cannot handle cycles - but there are extensions that can.
Sure it is GC; the very first chapter in most CS books about automatic memory management starts with reference counting and then proceeds to more elaborate algorithms.
It is usually presented as a quick solution for when the resources for doing more powerful GC algorithms aren't available, either in computing power or engineering.
Yes, with reference counting you must perform operations per allocation. But you already need to do this. Malloc / free, even arena-based solutions, all do this.
Objective-C is a superset of C. If you think about what it means to have a garbage-collected version of C for just a few minutes it should become immediately clear why abandoning GC for Objective-C is necessary and appropriate. The GC can't always reliably detect a reference (pointer) so it has to be conservative. It also can't compact the heap, which means you don't get the nice "allocation is just advancing a pointer" behavior that makes allocations nearly instantaneous in managed languages.
ARC is essentially just the old rules for manual reference counting (MRC), but the compiler analyzes your code and inserts the retain/release/autorelease calls automatically. This gives you similar deterministic behavior as MRC but while writing the code it feels like a garbage collected language because you mostly ignore management of memory and everything "just works".
The downsides to ARC are the inability to detect retain cycles and a slight performance hit compared to MRC in some scenarios. The performance is actually pretty good... What ARC loses compared to hand-coding it gains by using special runtime functions (not slower ObjC message sends) that elide unnecessary retain/release calls in many situations.
A pair of runtime functions, objc_autoreleaseReturnValue in the callee (typically a property getter) and objc_retainAutoreleasedReturnValue in the caller, cooperate here. The callee-side function examines the code at the return address to detect whether the caller is about to call objc_retainAutoreleasedReturnValue on the returned value; if so, it skips the autorelease and hands ownership straight to the caller, which in turn skips its retain, leaving only the caller's eventual release. Instead of the object staying alive until the next run loop iteration and getting autoreleased, it can safely be released immediately, and at least one extra pair of runtime calls is eliminated.
In C++, shared_ptr and similar "smart pointer" concepts are very similar to ARC. The key difference is it's a library feature, built atop destructors and copy-constructors, and not a language change like ARC.
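A toy example of the analogy (everything here is invented for illustration): copying a shared_ptr plays the role of retain, destroying one plays the role of release, and destruction of the pointee is deterministic, just as with ARC.

#include <memory>
#include <string>

struct Person {
    std::string name;
    explicit Person(std::string n) : name(std::move(n)) {}
};

int main() {
    auto alice = std::make_shared<Person>("Alice");   // count: 1
    {
        std::shared_ptr<Person> other = alice;        // "retain"  -> count: 2
    }                                                 // "release" -> count: 1
    return 0;                                         // count: 0, ~Person runs here, deterministically
}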
Microsoft's C++/CX introduced in 2011 is also very similar; the hat syntax[1] is to COM's AddRef/Release as ARC is to -retain/-release. In both schemes, what used to be manually refcounted objects are now managed by the compiler. I don't recommend C++/CX though, it is confusing to C++ devs. Smart pointers are a better recognized idiom.
[1] Unrelated to the hat syntax from MS's previous attempts, Managed C++ and C++/CLI, which are GC and not refcounting as in C++/CX.
The most important difference on OS X is that ARC is compatible with retain/release, but GC requires explicit support by every framework linked in your app. Some frameworks never added GC compatibility.
In the general case, it is estimated that GC uses on average twice as much memory (high water mark), but direct comparisons are of course very application specific. GC also has the dreaded collection pauses which can make performance unreliable (e.g. stuttering animations).
Biggest drawback of ARC is that it doesn't detect cycles, so the developer needs to have a global understanding of an app to avoid hard to debug leaks.
(Edit: Perl is garbage collected since version 6, and Visual Basic is garbage collected since version 7. I am not aware of any language besides Objective C moving in the opposite direction.)
CPython is actually hybrid reference counted and garbage collected. This is really an implementation detail for historical reasons, no other Python implementation I'm aware of uses reference counting.
There is no Visual Basic 7. Visual Basic .NET is garbage collected as it runs on the CLR, but is quite a different language to VB6.
C++ and Rust both provide reference counting pointer-like-things (shared_ptr and Rc respectively). C++ has been around for quite a while, of course, but Rust is pretty new, and deliberately eschewed garbage collection.
How can you guarantee that the component library that your company just bought, available only in binary form isn't doing something like this?
void cool_stuff_with_widget(const std::shared_ptr<UI>& ptr)
{
    // grab the raw pointer out of the shared_ptr, sidestepping the reference count
    UI* evilPtr = ptr.get();
    // now store evilPtr somewhere else and use it any time the library feels like it,
    // long after the last shared_ptr has let the object go
}
No one is going through disassembly to check such behaviors.
C++ RC is a very welcome addition to the language, but it only works if everyone plays by the rules, 100% of the time.
It's not really an issue: if the library is buggy and leaks or keeps references to things it's not supposed to, it will be the same whether it's RC or just plain memory management.
It's just better documented and easier to spot the bugs if you're doing RC.
It's not about bugs, but rather about C++'s lack of support for enforcing that RC is not misused and that programmers cannot grab RC internals and corrupt the data structures.
C++ is a language that allows any kind of memory manipulation, if you really want (or need) to do it. So your example doesn't prove anything about the adequacy of shared_ptr in particular.
My point was that if a language does RC via library types that expose the pointer they own, there is no memory safety guarantee.
It is impossible to prove that all memory accesses to a given address are done via the RC wrapper object, especially in the presence of third-party libraries in binary form.
Sadly due to its C compatibility, there is no way around this in C++.
This is mostly a problem in large projects with various skill levels across team members, having high attrition levels.
As I mentioned above, the goal of C++ is not to provide memory safety guarantees. What it provides is a mechanism for RC implementation. You're free to use or misuse it if you want. It will work only for those objects that you think should be managed with RC, nothing more or less.
In my opinion, if you understand how memory management works, ARC is definitely superior to GC. There are things it's not very good at, typically things that usual reference counting is not very good at either, such as handling cyclical data structures.
Resource management is a necessary evil but usually not a primary concern of most applications. Therefore in an ideal world it should be abstracted away and you should not have to care about it ever. In this respect I consider a full blown garbage collector as closer to the ideal than automatic reference counting. But automatic reference counting is of course also a kind of automatic garbage collection mechanism, just with a bit of a different focus.
No solution is optimal, but unless you have a huge number of allocations and deallocations going on, tight resource constraints, or hard timing constraints, current garbage collectors are really good enough. Not that you can't shoot yourself in the foot by accidentally keeping large object graphs alive, but that is easy to detect, though sometimes hard to fix.
Some developers seem to have some kind of resentment against garbage collectors because real developers of course manage their resources on their own, but many of their arguments are not well founded. Garbage collectors nondeterministically pausing program execution to free unused resources is a common point, but they usually don't realize that automatic reference counting has the same problem - you never know when a reference count drops to zero and triggers the deallocation of an object, possibly with a large object graph attached.
Not true. A reference counted program is deterministic, it is known in advance for a given set of inputs exactly when deallocation will occur.
Also, it's not really fair to sweep "hard timing constraints" under the rug as a minority use case when these are essential for smooth modern GUIs.
Sure, GC is pretty much ideal in terms of absolving developers of resource management responsibilities, but it comes with some significant downsides which can easily affect the final product and shouldn't be understated. Rust is a great example of a new language which avoids GC for this reason.
That is a pretty theoretical scenario; at least, I usually have no idea in advance what the user input will be. And garbage collectors pausing program execution is not a fundamental problem - there are garbage collectors operating entirely in parallel with normal program execution. Further, garbage collection itself seems not to be the main source of those pauses but rather heap compaction, which you also lose if you use automatic reference counting (or have to do manually).
I didn't say you had to know the inputs in advance. What's known in advance is when deallocation will occur, i.e. it's deterministic. You know exactly when deallocation will occur.
Heap compaction is another part of GC which results in long non-deterministic pauses. But the real problem is having fragmentation in the first place, this is something which manual allocation strategies can go a long way to mitigate.
How can you know when a deallocation will occur? Did the user add this product to a single invoice, so the product object will get deallocated when closing this invoice form, or is there a second invoice still referencing the product? At best you can know when a deallocation might possibly occur, but that is whenever a reference changes or goes away, which is usually not very helpful.
For any sufficiently complex real world application you will not in general know when a deallocation occurs and that is one of the reasons you decided to use automatic reference counting in the first place. Some short lived objects within a function are non-issues to begin with and are easily handled manually if you want to, but the interesting bits and pieces are objects with unpredictable life time and usage pattern because they are heavily influenced by inputs.
When will a deallocation occur? When release is called and the reference count is zero. This is deterministic, unlike GC which will perform the deallocation when it feels like it.
I think you've confused determinism with regards to a program and the inputs to that program. Any retain/release program is deterministic because it always behaves in the same manner. Given any possible input I can tell you exactly when and on which lines of code the deallocations will occur. You can't know that with GC, especially when other libraries and threads enter the equation.
Which deallocations occur and in which order depend on the input to the program, but we still know in advance all possible lines of code where specific deallocations can occur and we have complete control over that. With GC we wouldn't know that and we couldn't control it.
Knowing when deallocations can occur and having control over that is the reason to choose ARC over GC in the first place. Not the other way around. With respect to manual retain/release, ARC doesn't take anything away: I can still retain an extra count of a large object to prevent its untimely release and share that object via ARC at the same time.
Two things. First, most GCs don't actually deallocate anything, but that's a minor nitpick. The second, more important note is that you are now describing a system where everything is garbage collected. This needn't be the case. In a language like Go, you have control over what gets stack allocated and what goes on the GC heap. When that choice can be made, then you really can tell when given objects are deallocated. Of course, since the stack is a fixed-size construct, I guess you don't really deallocate these things either...
It's also worth noting that GCs rarely do anything because they feel like it, but because you are allocating past a certain limit. Knowing the size of objects, and given that the allocator can provide you with the current size of allocated objects, you should be able to determine if a GC is likely to occur in the next lines of code. This determination becomes harder when threads enter the picture, unless you're using a GC scheme like Erlang's, which has one GC per process (lightweight thread). Some GCs also allow you to pause and resume them for when you really want to avoid GC cycles; this can of course cause your program to crash if you allocate past your limit.
> At best you can know when a deallocation might possibly occur, but that is whenever a reference changes or goes away, which is usually not very helpful.
After a while working with refcounting, I found myself writing code whose natural flow meant I was actually pretty sure, at least for deallocations large enough to care about. I don't even think about it any more, really, I just continue to enjoy my objects going away when I expect them to.
Of course you can know that, because you know the type of every object. If you're deallocating a big data structure, the impact will be big. It's easy to choose not to deallocate such objects at an inconvenient time by holding references to them. Compare this with GC, where the collection can occur at any time after deallocation, e.g. during an animation.
It is also possible to know, approximately, when a GC collection will take place, this is what GC profilers are for.
The thing is that most discussions tend to be reduced to RC vs GC, as if all GCs or RCs were alike, whereas the reality is more complex than that.
RC usually boils down to dumb RC, deferred RC, or RC with counting elision via the compiler. Additionally it can have weak references or rely on a cycle collector.
Whereas GC can be simple mark-and-sweep, conservative, incremental, generational, concurrent, parallel, real time, with phantom and weak references, constrained pause times, coloured....
As for frame drops during animations, I think pauses in a missile control system are not something one wishes for:
Sure, but then real-time systems are a very special use case with their own languages, tools and techniques. Real time GCs are severely limited and would not be suitable for use in a general purpose context.
You can do almost the same with a garbage collector - keep references alive until a convenient point is reached, then kill those last references and manually force a collection. Not that it seems a good idea, but if you need the level of control you describe, then you are already actively fighting against automatic reference counting and there seems to be no point in using any kind of automatic resource management in the first place, at least for the relevant objects.
It's not possible to do that in Java, though it would work in C#. Retaining references really is not "fighting against automatic reference counting", it's the opposite: the reference clearly and unambiguously defines the lifetime of the object, including deallocation. This is by design and ARC can still be used for any shared references to the object. Forcing GC on the other hand, would be fighting against it because you're circumventing its usual operation.
I think technically you could achieve this by having multiple heaps, kind of like custom allocators in other languages.
If Java had a placement syntax like
new (HeapReference) SomeClass();  // allocate in the custom heap
Then you could just turn off GC for that heap, and then blow the whole thing away when it got full. Or, you could have a more fine grained API to allow GC on these custom heaps. Perhaps you could even allow copying objects back to the main heap. You'd have to ban cross heap references or have a smart API that lets users pin an object in the main heap while it is known to be referenced by an object in custom heap.
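For what it's worth, C++ can already approximate that pattern with placement new into an arena you throw away wholesale. A bare-bones sketch (types and sizes invented for illustration; it assumes trivially destructible objects so skipping destructors is safe, and that the backing buffer is suitably aligned):

#include <cstddef>
#include <new>
#include <vector>

// A trivial bump allocator: objects are placement-new'd into one buffer and
// the whole region is released at once, with no per-object bookkeeping.
class Arena {
public:
    explicit Arena(std::size_t bytes) : buffer_(bytes), used_(0) {}

    void* allocate(std::size_t size, std::size_t align) {
        std::size_t offset = (used_ + align - 1) & ~(align - 1);
        if (offset + size > buffer_.size()) throw std::bad_alloc();
        used_ = offset + size;
        return buffer_.data() + offset;
    }

    void reset() { used_ = 0; }  // "blow the whole thing away when it got full"

private:
    std::vector<std::byte> buffer_;
    std::size_t used_;
};

struct Particle { float x, y, dx, dy; };  // deliberately trivially destructible

int main() {
    Arena frame_arena(1 << 20);
    // Roughly the `new (HeapReference) SomeClass()` idea from above:
    void* slot = frame_arena.allocate(sizeof(Particle), alignof(Particle));
    Particle* p = new (slot) Particle{0, 0, 1, 1};
    (void)p;
    frame_arena.reset();  // no GC, no per-object frees
    return 0;
}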
Most of the time in client applications, you only care about GC pauses on the main UI rendering thread, but GC pauses in other threads are less obvious because there's no jank, just that latency may go up for some operations.
You could have a kind of best of both worlds with languages that support non-GC allocation for your UI thread, but GC everywhere else.
It wouldn't work reliably in C# either. Both platforms allow you to inform the GC that now would be a good time to collect, but in both systems the GC can ignore your advice. C# has structs which will try to allocate on the stack, but that is potentially a big (or impossible) refactoring on your program (unless you plan ahead).
A GC cannot occur at any time after deallocation. Well, I guess it depends on the collector really, but usually it happens on allocation. If you avoid allocations, you also avoid collections.
Garbage collectors don't solve the problem of "resource" management, but of memory management. Languages like Java have mechanisms such as try with resources (not available in Android) or try finally blocks which seem error prone to me, since the programmer has to know that that particular resource should be released explicitly.
They also make developers lazy, they get used to not thinking about object lifetimes and the costs of allocation. YMMV of course.
What is the reasoning behind that? Given the advances over the last decade or two automatic garbage collection is good enough for a lot of scenarios and this looks like a step back. Why not use automatic garbage collection as the default but offer additional means for manual resource management in case it is really needed?
Apple failed to produce a working GC that didn't crash all the time, because it required all Objective-C libraries to be compiled the same way.
Additionally the C subset only allows for conservative GC, which is not optimal when performance matters.
So in the end they went with ARC, which is nothing more than having the compiler produce the retain/release calls that Mac OS X frameworks were already expecting anyway.
So this has more to do with how technically feasible it is to have a proper GC in Objective-C, than GC in general.
IMHO, they want to get everything that has a UI as responsive as possible.
GC in theory is a good feature but it does not guarantee that constant time is used for each mark-and-sweep cycle. And together with multithreading that means uncertainty for rendering every frame of video, which is done in the main thread of an app 60 times per second. On a modern system, anything you see on screen is part of this rendering. Try to think about how memory shared by the main thread and a background thread could be GCed. It could turn out to be: halting the process for GC, retaining some memory forever, or implementing some complicated algorithm that is too difficult to debug/maintain.
I've had to deploy lots of soft real time apps on GCed environments over the years, and it's always a problem. You can work around it with things like object pools, but some library or API will assume that the GC is OK and will be quietly spitting out objects continuously which will lead to a GC pause.
It's worth pointing out the Android devs finally started noticing this for Lollipop (probably due to their animations) and the API now has lots of places where it passes Java primitives instead of objects, which is the distinction between passing by value and by reference. Even if you're in C++ modern compilers can only make the most out of it if you pass by value, as this enables all sorts of other optimisations to kick in.
The key benefit of reference counting is it's predictable. Real time systems are also not strictly the lowest latency, they are defined by predictability. This becomes a preoccupation with minimising your worst case scenario.
Jellybean and Lollipop didn't get better smoothness by replacing objects with primitives, the Android API is fixed by backwards compatibility requirements. They did it through a mix of better graphics code and implementing a stronger GC.
If you look at the most advanced garbage collectors like G1 you can actually give them a pause time goal. They will do their best to never pause longer than that. If pauses are getting too long they increase memory usage to give more breathing room. If pauses are reliably less, they shrink the heap and give memory back to the OS.
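For instance, with HotSpot's G1 the goal is literally a command-line flag (the jar name here is just a placeholder, and the collector treats the value as a target, not a hard guarantee):

java -XX:+UseG1GC -XX:MaxGCPauseMillis=10 -jar MyApp.jar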
Reference counting is not inherently predictable and can sometimes be less predictable than GC. The problem with refcounting is it can cause deallocation storms where a large object graph is suddenly released all at once because some root object was de-reffed. And then the code has to go through and recursively unref the entire object graph and call free() on it, right at that moment. If the user is dragging their finger at that time, tough cookies. GC on the other hand can take a hint from the OS that it'd be better to wait a moment before going in and cleaning up .... and it does.
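Here is a contrived way to watch such a storm happen with plain reference counting, using C++'s shared_ptr to stand in for any refcounted scheme:

#include <memory>

struct Node {
    std::shared_ptr<Node> next;
};

int main() {
    // Build a long chain of refcounted nodes, all reachable from one root.
    auto head = std::make_shared<Node>();
    Node* tail = head.get();
    for (int i = 0; i < 10000; ++i) {
        tail->next = std::make_shared<Node>();
        tail = tail->next.get();
    }
    // Dropping the single root reference frees the entire chain right now,
    // node by node, on whichever thread released last; make the chain deep
    // enough and the naive recursive teardown can even overflow the stack.
    head.reset();
    return 0;
}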
It gets even worse when you consider that malloc/free are themselves not real time. Mallocs are allowed to spend arbitrary amounts of time doing bookkeeping, collapsing free regions etc and it can happen any time you allocate or free. With a modern GC, an allocation is almost always just a pointer increment (unless you've actually run out of memory).
The problem Apple has is that their entire toolchain is based on early 1990's era NeXT technology. That was great, 25 years ago. It's less great now. Objective-C manages to excel at neither safety nor performance and because it's basically just C with extra bits, it's hard to use any modern garbage collection techniques with it. For instance Boehm GC doesn't support incremental or generational collection on OS X and I'm unaware of any better conservative GC implementation.
Some years ago there was a paper showing how to integrate GC with kernel swap systems to avoid paging storms, which has been the traditional weakness of GC'd desktop apps. Unfortunately the Linux guys didn't do anything with it and neither has Apple. If you spend all day writing kernels "just use malloc" doesn't seem like bad advice.
Objective-C uses autorelease pools so deallocation doesn't happen immediately when the refcount goes to zero. Its reference counting implementation is smarter than a simple naïve one.
Apple's GC implementation wasn't a Boehm GC [1].
It's true that it's hard to use a tracing GC with Objective-C, because of the C. But, if you want interoperability with C, you're kind of stuck.
> The problem with refcounting is it can cause deallocation storms where a large object graph is suddenly released all at once because some root object was de-reffed
This only happens if you choose to organize the data this way. This is a big difference from GC, where the whole memory layout and GC algorithm is out of your control.
Depends on the language. Go, for instance, gives you a lot of freedom when it comes to memory layout, and allows you to stack allocate objects to avoid GC.
I'm not arguing in favor of GC. The argument was that a GC takes away memory control, but memory control is up to the language.
Go doesn't restrict you to stack/heap allocation. You can create structs which embeds other structs. This simplifies the job the GC has to do, even if you don't allocate on the stack.
You can do something similar with Struct types in C#.
Better graphics code is exactly what I'm on about: it's about removing any triggers for GC, which means removing allocations.
If you're blocking your UI thread with deallocating a giant graph of doom then you have other problems. Deferring pauses, however, is not a realistic option.
I was referring to the triple buffering, full use of GL for rendering and better vsyncing when I talked about graphics changes, not GC stuff. That was separate and also makes things smoother but it's unrelated.
Deferring pauses is quite realistic for many kinds of UI interaction and animation. If your animation is continuous/lasts a long time and requires lots of heap mutation then you need a good GC or careful object reuse, but then you can run into issues with malloc/free too. But lots of cases where you need something smooth don't fit that criteria.
> It's worth pointing out the Android devs finally started noticing this for Lollipop (probably due to their animations) and the API now has lots of places where it passes Java primitives instead of objects, which is the distinction between passing by value and by reference.
This is simply not true. The API has always been heavily based on Java primitives. They didn't even use enums in the older APIs, preferring instead int constants. (I hate that one personally). GC pauses have always been a point of focus for the Android platform.
Notice the introduction of methods with API level 21 that recreate existing functionality without RectF objects being allocated. Touch events, for example, still spit ludicrous amounts of crap onto the heap.
All this is why the low latency audio is via the NDK as it's basically impossible to write Android apps which do not pause the managed threads at some point. Oddly this is stuff the J2ME people got right from day one.
That's terribly ugly. Can they not do escape analysis or something to avoid allocations in obvious places? Or only allocate when the value is moved off the stack?
Doesn't Android use a different flavor of Java anyways, allowing them to make these changes?
Yes, it's possible in theory, but Dalvik/ART don't do it. HotSpot does some escape analysis and the Graal compiler for HotSpot does a much more advanced form called partial escape analysis, which is pretty close to ideal if you have aggressive enough inlining.
The problem Google has is that Dalvik wasn't written all that well originally. It had problems with deadlocking due to lock cycles and was sort of a mishmash of C and basic C++. But then again it was basically written by one guy under tight time pressure, so we can give them a break. ART was a from scratch rewrite that moved to AOT compilation with bits of JITC, amongst other things. But ART is quite new. So it doesn't have most of the advanced stuff that HotSpot got in the past 20 years.
Yes, this is why I always find it sad that language performance gets thrown around in discussion forums without reference to which implementations are actually being discussed.
Reference counting on its own does not provide any guarantees about when objects get deallocated either; any removed reference may make the counter zero and trigger a deallocation. It may of course be worse with a full blown garbage collector building up a huge pile of unused objects and then cleaning them up all at once. But that is not a necessary limitation, there are already garbage collectors performing the entire work in parallel with the normal application execution.
Objective-C uses pools, so deallocation doesn't happen automatically when the reference count hits zero. Apple's reference counting implementation is fairly smart.
Over on the Reddit discussion there was a comment from Ridiculous Fish, who was (and probably still is) an Apple developer and worked on adding GC to the Cocoa frameworks:
Basically, because of interop with C, there's only so much you can do. Plus, the tracing GC wasn't on iOS so if you want unified frameworks (for those that make sense cross-platform), supporting the tracing GC along with ARC is added work.
My understanding: they abandoned garbage collection in Obj-C years ago and think ARC is overall better.
And my guess: GC probably doesn't play nice with recent/future changes in OSX memory management (e.g. compressed memory) so the less apps use it the better the whole system is overall.
Hence, they probably want to get rid of the whole thing ASAP, forbidding it _for apps submitted to the mac app store_ is a reasonable step in that direction.
Probably because Apple is moving towards a unified iOS/OSX SDK. We have seen evidence of this with the Photos app which uses UXKit (unified AppKit/UIKit). I would imagine that Apple is pushing developers towards this new SDK landscape step by step starting with ARC.
There is precedent for Apple doing this, usually related to a new hardware platform, e.g. x86, 64-bit, and perhaps ARM Macs.
I find that a good thing, because developers are trained to think that they have to use weak references and think about object lifetimes.
Android developers tend to not think so much about lifetimes which leads to memory leaks because Dalvik apparently can't figure out some of those cycles by itself. Just do a search for "Android memory leak" to see some examples.
Try to parse any tree structure without creating cycles (say the XML/XHTML inside an ebook) - that may be possible with just one or two devs on the project, but as soon as not every dev knows every line of code there is an opportunity for errors, and you only need one link to leak memory.
That Dalvik can't handle the cycles is stupid, but it's an issue with Dalvik (which is likely to be phased out with the new runtime), not an issue with GC. Computers are really, really good at executing trivial tasks repeatedly without ever making a stupid mistake; humans, not so much.
And as Terminator taught us: never send a human to do a machine's job.
And time constraints are also more of an illusion than real - unless your program has no input from the outside world you cannot really know when a reference count will drop to zero and trigger a deallocation. Every time a reference goes away may be the time an (unexpected) deallocation happens.
With one difference: the reference count is usually stored inside the object's pointer (taking advantage of the fact that allocations are 16-byte aligned so the lowest 4 bits are otherwise idle). This means that there's no storage overhead for the reference count.
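The general trick looks roughly like this (a hedged sketch, not Apple's actual layout; real runtimes also spill the count into a side table once it no longer fits in the spare bits):

#include <cassert>
#include <cstdint>

// Pack a pointer to a 16-byte-aligned object together with a 4-bit counter.
using PackedRef = std::uintptr_t;

PackedRef pack(void* ptr, unsigned count) {
    auto bits = reinterpret_cast<std::uintptr_t>(ptr);
    assert((bits & 0xF) == 0 && count < 16);  // alignment frees the low 4 bits
    return bits | count;
}

void* pointer_of(PackedRef r) { return reinterpret_cast<void*>(r & ~std::uintptr_t{0xF}); }
unsigned count_of(PackedRef r) { return static_cast<unsigned>(r & 0xF); }

int main() {
    alignas(16) static char object[16];
    PackedRef r = pack(object, 3);
    return (pointer_of(r) == object && count_of(r) == 3) ? 0 : 1;
}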
Weak references don't primarily exist to deal with cycles, and cycles generally often involve necessarily strong references even in systems that support weak references. (That is, e.g., cycles where A needs to be reachable if B is reachable and vice versa, such that a weak reference that allowed B to be disposed while A was reachable or vice versa would not be acceptable. A system that can detect cyclic garbage will still be able to collect both A and B -- connected by strong references -- when neither can be reached from the rest of the program.)
The standard use case for weak references in Objective-C is a child-to-parent reference. The parent holds a strong reference to the child and children hold weak references to their parents, avoiding potential cycles.
In fact, part of good memory management in a reference counted system is that you should always have a hierarchy to your data structures so it never makes sense to have an actual strong reference loop.
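The same pattern in C++ terms, with weak_ptr standing in for the child-to-parent edge (names invented for illustration):

#include <memory>
#include <vector>

// The parent owns its children (strong); each child only observes its
// parent (weak), so parent and child never form a strong cycle.
struct TreeNode : std::enable_shared_from_this<TreeNode> {
    std::vector<std::shared_ptr<TreeNode>> children;  // strong: parent keeps children alive
    std::weak_ptr<TreeNode> parent;                   // weak: child never keeps parent alive

    void add_child(const std::shared_ptr<TreeNode>& child) {
        child->parent = shared_from_this();
        children.push_back(child);
    }
};

int main() {
    auto root = std::make_shared<TreeNode>();
    root->add_child(std::make_shared<TreeNode>());
    return 0;  // dropping `root` frees the whole tree: no cycle, no leak
}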
So then you play pass-the-strong-reference in a destructor - i.e. if A goes out of scope the A->B ref is weakened and the B->A ref is strengthened, then vice versa if required.
We did this for DBIx::Class in perl, thereby keeping full liveness but still getting timely destruction for connection objects. It was a trifle insane to get right, but it works extremely well.
Sort of - CPython has both reference counting and a normal garbage collector. You can disable the latter and still avoid memory leaks as long as you don't have cycles.
I wonder how many applications will start leaking memory as a result of this? Cyclic references are not that uncommon in many algorithms, and identifying and removing them in applications that relied on automatic cycle collection will be challenging and will take time.
When it comes to overall ease of use, the importance of tooling dwarfs the importance of GC vs RC, and apple has had really solid RC tooling for a while. It graphs memory usage, searches out leaked objects (by injected GC or by diffs), lists them, lists the retain/release events with timestamps, and lets you jump to the responsible code with a single click. Compare that to the nightmare of trying to get a misbehaving GC to cooperate and suddenly it doesn't seem so crazy.
EDIT: Or at least they had such tools 5-6 years ago, regression is certainly possible. For unrelated reasons I've been playing in other playgrounds for the last 5-6 years so I can't say for sure.
Xcode can help with that, as it says in the "Transitioning to ARC Release Notes":
> To aid in migrating existing applications, the ARC migration tool in Xcode 4.3 and later supports migration of garbage collected OS X applications to ARC.
This is a pity. I have an app that uses ARC on 64bit, and GC on 32bit. Allowed me to ditch manual memory management without dropping support for 32bit machines.
The most recent version of Microsoft Office for OS X is still 32-bit, still uses the deprecated event system, and still uses deprecated file I/O functions ... Remember resource forks?
If I wasn't clear, my question was the opposite, not is there still demand to run 32-bit software, but is there demand for new 32-bit software from people who can't run 64-bit.
I've assumed anybody with an almost 10-year-old computer doesn't get that much new software. I tend to release supporting the last 2 or 3 OS releases and 64-bit only and haven't received any complaints.
I did indeed misinterpret. To answer that way, I'm not personally aware of anyone demanding new apps targeted to 32-bit on OS X. As an OS X developer, I'm averse to the idea of starting a new app with that as a target.
I still use my 2006 iMac occasionally, so I care. Macs live a lot longer than phones.
However, you are right, there is little demand for 32bit software; my newer apps require 64bit. But I see no reason to cut off support in an old product.
No. You can write it in whatever you want (even on iOS these days) though you can't depend on any third party pre-installations. IOW you need to be able to vendor in your 3rd party language and library dependencies, and use a thin Objective-C wrapper to make the whole thing natively executable.
This announcement is more about a feature in Objective-C being actively discouraged by refusing to distribute your app on their store. You can still use it if you distribute your app on your own.
I thought they just banned the use of code not embedded in your app. You are free to JIT, just not files downloaded from the net; only files embedded in your app.
Well, UIWebView (the standard web view, pre iOS 8) does not have Nitro, but WKWebView (introduced in iOS 8) has Nitro. The only problem is that while they are similar, the WKWebView API is not feature complete with UIWebView, leading to a lack of adoption from big players such as Chromium.
Pretty sure there's no JIT, because the Mono guys had a few compat issues due to requiring ahead-of-time compilation for iOS. If JIT were an option then it'd certainly be used.
Awesome. Now all they gotta do is stop treating strings like we're in a Pascal Intro to Computer Science class circa 1987 and we gots ourselves a kick ass development environment here. :)
What the fuck? This is a really bad technical decision. As written, it sounds like they're planning to reject garbage collection in-general, which would be a ban on most major programming languages. If you assume they only mean garbage collection in Objective C, it's still a head-scratchingly stupid decision. Modifying an app to not use garbage collection is, in many cases, a major project that will introduce a lot of bugs. And the benefit is... very dubious at best.
I'm pretty sure this notification is only meant for apps using the Objective-C garbage collector, as the notification mentions that the GC was deprecated a while ago. I don't think a lot of Mac developers used the Objective-C GC anyway.
Finally, Apple mentions the following in its migration document [0]:
> Is GC (Garbage Collection) deprecated on the Mac?
> Garbage collection is deprecated in OS X Mountain Lion v10.8, and will be removed in a future version of OS X. Automatic Reference Counting is the recommended replacement technology. To aid in migrating existing applications, the ARC migration tool in Xcode 4.3 and later supports migration of garbage collected OS X applications to ARC.
Based on the above statement, my guess is that Xcode has some built-in tools to convert GC code to ARC code, making the transition easier.
Perhaps, but I see no reference to 'Objective-C apps', the current wording of this summary is broad enough to include other languages as well. Do you have additional information that exempts Java, Mono, etc...?
All native Mac apps need an Objective-C wrapper at minimum to give them an executable entry point (all apps are actually directory structures). On the App Store you also need to vendor in your dependencies. There's no language restriction in play. You can even still use the old GC if you want - just not on their store. The Mac App Store isn't the only game in town.
Er, what? Transitioning from GC to ARC doesn't make code cleaner or reduce bugs. It eliminates GC pauses, which is a slight speedup, but the set of bugs that a garbage-collected program has are roughly a subset of the bugs that a reference-counted program can have.