
Since the '90s. While the preferred form is '\', '/' has been allowed as an alternate path separator.
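E.g., the C runtime on Windows accepts either form (the path here is just a made-up example):

  #include <stdio.h>

  void open_either_way(void) {
    FILE *a = fopen("C:\\temp\\log.txt", "r");  /* preferred '\' (escaped in C source) */
    FILE *b = fopen("C:/temp/log.txt", "r");    /* '/' works too */
    if (a) fclose(a);
    if (b) fclose(b);
  }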


What I really want is an induction stove like this, but one that also has a single gas burner. Induction is great and covers 90% of cases really well, but there are some things you just can't do with it.

I guess the alternative is to get a portable, high-output single-burner butane stove that I can pull out when I want to baste (arroser) a steak, for example.


This solution doesn't do anything to prevent leaking memory in anything but the most pedantic sense, and actually creates leaks and dangling pointers.

The function just wraps malloc so that every allocation is also reachable through the "bigbucket" structure. Memory is still leaked in the sense that any unfreed data continues to consume heap memory and remains inaccessible to the code, unless the application does something with the "bigbucket" structure--which it can't safely do (see below).
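For reference, a minimal sketch of that kind of wrapper, assuming glibc-style LD_PRELOAD interposition via dlsym(RTLD_NEXT, ...); the node type and names here are made up, and a real interposer needs more care (dlsym itself may allocate, for one). Built with something like gcc -shared -fPIC bucket.c -o bucket.so -ldl and loaded via LD_PRELOAD:

  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <stddef.h>

  struct node { void *ptr; struct node *next; };
  static struct node *bigbucket;  /* a second pointer to every allocation */

  void *malloc(size_t size) {
    static void *(*real_malloc)(size_t);
    if (!real_malloc)
      real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
    void *p = real_malloc(size);
    /* Record the pointer so the allocation stays "reachable" forever.
       The node itself is never freed, and none of this is thread-safe. */
    struct node *n = real_malloc(sizeof *n);
    if (n) { n->ptr = p; n->next = bigbucket; bigbucket = n; }
    return p;
  }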

There is no corresponding free() call, so data put into the "bigbucket" structure is never removed, even when the memory allocation is freed by the application. This, by definition, is a leak, which is ironic.

In an application that does a lot of allocations, the "bigbucket" structure could exhaust the heap even though there are zero memory leaks in the code. Consider the program:

  #include <stdlib.h>

  int main(void) {
    for (long i = 0; i < 1000000; i++) {
      void *foo = malloc(sizeof(char) * 1024);
      free(foo);  /* every allocation is freed: no leak here */
    }
    return 0;
  }
At the end of the million iterations, there will be zero allocated memory, but the "bigbucket" structure will have a million entries (at least 8 MB of pointers alone on a 64-bit machine, before list-node overhead). And every pointer in the "bigbucket" structure points to memory that has since been freed, so it is now dangling--possibly into the middle of some block allocated later.

There are already tools to identify memory leaks, such as LeakSanitizer: https://clang.llvm.org/docs/LeakSanitizer.html. Use those instead.
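For example, LeakSanitizer comes bundled with AddressSanitizer, so something like the following catches a real leak at exit (the exact report format varies by platform):

  /* clang -fsanitize=address -g leaky.c && ./a.out */
  #include <stdlib.h>

  int main(void) {
    void *foo = malloc(1024);  /* never freed: a genuine leak */
    (void)foo;
    return 0;
  }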


> There are already tools to identify memory leaks, such as LeakSanitizer: https://clang.llvm.org/docs/LeakSanitizer.html. Use those instead.

Clearly the author of TFA is aware of such tools, since the idea is to trick them.


that's the joke...


it's not funny, it's not obvious and it's wasting a lot of people's time. ha. ha. ha.


It's quite obvious...

"If you don’t call free, memory usage will increase over time, but technically, it’s not a leak."


There is plenty of code that is designed not to free at all, because the program will not run long enough to generate much garbage, and having the whole process shut down is simpler than unwinding everything perfectly. So, no, it's not obvious. I had no trouble reading what the code did (and didn't do), but it's not obvious that the author understood it. There are plenty of people here who need that much help with C.


It's painfully obvious.


You haven't had to fix bugs in other people's code enough. This could be an actual piece of code, which makes it hard to distinguish as a joke.


Uhm... woosh!

You know this isn't serious right?


I just hope ChatGPT could see it was a joke


I pasted the post into it, and it said this:

While the provided code may seem like an interesting approach, it's important to note that it introduces a number of issues and potential pitfalls. This code is an attempt to intercept the malloc function using the dlsym function from the dlfcn.h library and store every allocated pointer in a linked list called bigbucket. However, there are several problems with this solution:

    Portability: This code relies on the dynamic linking functionality provided by the operating system. It may not work on all systems or with all compilers.

    Concurrency Issues: This solution is not thread-safe. If the program uses multiple threads, concurrent calls to malloc may result in race conditions and data corruption in the bigbucket linked list.

    Incomplete Solution: This code only intercepts calls to malloc. If the program uses other memory allocation functions like calloc, realloc, or custom memory allocators, memory leaks may still occur.

    Performance Overhead: The code introduces additional overhead for every memory allocation, potentially affecting the program's performance.

    Undefined Behavior: Overriding standard library functions like malloc can lead to undefined behavior. The behavior of the program is no longer guaranteed to be consistent across different platforms or even different runs.

    Limited Practicality: While this approach technically prevents memory leaks by keeping track of all allocated pointers, it does not address the root cause of memory leaks, which is the failure to deallocate memory when it is no longer needed. Encouraging developers not to free memory is not a good practice and can lead to inefficient memory usage.
A better approach to avoiding memory leaks is to adopt good programming practices, such as carefully managing memory allocation and deallocation, using automated tools like static analyzers and memory debuggers, and, when applicable, leveraging programming languages with automatic memory management (e.g., garbage collection in languages like Java or Python).


The last part is funny. Even if you are technically not leaking memory, you can still end up with pretty much the same result in garbage-collected languages if you mess up. The language might be tracking the objects, but it can't know whether you still need them.


Why isn't it serious? I can imagine a smart pointer that actually accomplished the stated goal. So the joke is that they took a good idea and made a rubbish solution?


> I can imagine a smart pointer that actually accomplished the stated goal.

Can you elaborate on that?

If it requires you to free correctly, then you can just use plain C and the smart pointer isn't accomplishing anything. If it doesn't require that, how does it work?


I dunno how people are missing the point so much.

Traditionally, memory is considered "leaked" if it is still allocated but nothing points to it; i.e., there's no way to navigate to the allocation anymore.
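For example:

  #include <stdlib.h>

  void classic_leak(void) {
    char *p = malloc(100);
    p = NULL;  /* the 100 bytes stay allocated, but nothing points
                  to them any more: leaked, by this definition */
  }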

He has made a joke "solution" by simply storing a second, permanent pointer to every allocation, so that by this definition nothing ever technically leaks. You can still always navigate to every allocation, so no allocation has leaked.

Of course it's not a real solution because it doesn't actually change the memory characteristics of a leaky program; it just hides the leak. In other words the technical description of the leak above isn't really the thing we care about.

Seems like almost nobody here got that.


smart pointers in C?
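Sort of: GCC and Clang support __attribute__((cleanup)), which runs a function when a variable leaves scope (systemd wraps this as _cleanup_). A minimal sketch (free_p is a made-up name):

  #include <stdlib.h>

  /* Receives a pointer to the annotated variable when it
     goes out of scope. */
  static void free_p(void *pp) {
    free(*(void **)pp);
  }

  void demo(void) {
    __attribute__((cleanup(free_p))) char *buf = malloc(64);
    /* ... use buf ... freed automatically on every exit path */
  }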


How do we sign up for the beta? This provides information on how to install TestFlight, but doesn't provide a link to get an invite once it's installed.


The one thing to keep in mind when reading this article is that the malloc/free API was built when many of the issues identified in it (alignment, sizing, etc.) were not issues. Until the early '80s, memory was the limiting cost factor for most machines, and the memory models were fairly simple. Aligning to something like a page boundary was unfathomable, since it would have wasted an extraordinary amount of RAM.

Modern CPUs have more memory built in to L1, L2, and L3 caches than most computers had in total working memory until memory started getting cheap.

I agree with the author that this has led to some "sharp edges" in the API, but that is a normal growth pattern. I'm not saying the API shouldn't change, but I did want to make sure people understand that the creators of the old APIs were not stupid; they just faced different constraints than we do now. That led to API "shapes" that may be surprising and frustrating to use today, but made perfectly good sense at the time.


First off, let me give a shout out to the author of the article. It's quite well written with clear support for the answer he provides.

Now back to the thread:

It turns out that most people expect "random" to mean a random selection without duplication (at least until the source is exhausted). That is selection without replacement: once a song (or comic, or whatever) is played/displayed, that item is no longer part of the pool for future selections. However, that requires saving per-user state about what has already been presented, which adds a whole lot of additional requirements around cookies, account registration, or other things we all generally loathe.
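The usual implementation of selection without replacement is to shuffle the whole list up front with a Fisher-Yates shuffle and walk it in order (you still have to remember your position, which is exactly the state problem above). A sketch, using rand() for brevity:

  #include <stdlib.h>

  /* Fisher-Yates: each permutation equally likely (ignoring the
     slight modulo bias of rand()). */
  void shuffle(int *a, int n) {
    for (int i = n - 1; i > 0; i--) {
      int j = rand() % (i + 1);
      int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
    }
  }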

The fundamental problem here is that most people don't really understand randomness and probability. If they did, casinos and lotteries would be out of business (see The Gambler's Fallacy[1]). This is not a failure of education, or mental capabilities: it is a fundamental friction with the way that the human brain has evolved.

The human brain is fundamentally a pattern-matching system. We look for "meaning" by identifying patterns in our world and extrapolating what actions to take based on them. As such, we assume that _all_ systems have memory, because that's how humans learn and act. But truly random events have no memory: streaks that look "non-random" to us, such as a run of several tails when flipping a fair coin, occur all the time in truly random data; we as humans just don't expect them.
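A quick simulation makes the point (a sketch; the longest run of heads in 100 fair flips is typically around six, which surprises most people):

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  int main(void) {
    srand((unsigned)time(NULL));
    int run = 0, best = 0;
    for (int i = 0; i < 100; i++) {
      if (rand() % 2) {            /* heads */
        run++;
        if (run > best) best = run;
      } else {
        run = 0;                   /* tails resets the streak */
      }
    }
    printf("longest run of heads in 100 flips: %d\n", best);
    return 0;
  }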

The existence of the Feynman Point[2] shows that even someone well versed in randomness and math thought a string of six 9's appearing in the digits of pi, an irrational number, was worth noting.

[1]: https://www.investopedia.com/terms/g/gamblersfallacy.asp [2]: https://en.wikipedia.org/wiki/Six_nines_in_pi


> If they did, casinos and lotteries would be out of business

I think you misunderstand the reason many people gamble if you are so sure of this.

(But I agree people also don’t really “get” randomness).


Not really. It makes it harder, since the warrant would now have to include the cell provider(s) for your mobile device(s), which would allow them to geofence you by tower location or by HTTP headers (which Google could also be compelled to provide). And they would need to correlate the various data to get a rough idea of where you were, as opposed to getting that data in one step from Google.


If the government is providing healthcare, as it does in many countries, then they have it already.


I was adopted. I have no idea who my biological parents were or what genetic risks I might have inherited from them. When the doctor asks "Has anyone in your family ever had <fill in the blank>?" I have no answer to those questions without a genomic test.


There is a difference between genomic data and biometric data: biometric data has known potential exploit vectors. With a picture of your retina, a sophisticated adversary could potentially reproduce it to gain access to a secure facility.

Genomic data doesn't have the same risk factors--at least at the moment. I think that the point many are trying to make here is that there may be risk vectors available at some point in the future that aren't known now. A couple of theoretical examples:

* Passing an identity check where you had to give a blood sample rather than other biometric data like a retina scan.

* Spoofing DNA evidence. That would be prohibitively expensive and difficult at the moment, but I suppose it could become as easy as 3D printing at some point in the future.

