
> The appeal of Go is that it has just the bare minimum feature set to be useful for modern backend engineering

I know that many people love Go, and I respect that, but I was never able to grasp its appeal (despite my two-year stint as a professional Go programmer). To me, the philosophy of Go seems to prioritize simplicity in the language at the cost of making software written in it more complex. Writing in Go often felt like a constant exercise in having to reinvent even the simplest things.


I suspect you've only seen poorly written Go, which is actually quite pervasive. The most common sign of it is the overuse of interfaces. You basically shouldn't define your own interface unless you need to export functionality. Interfaces are not a tool for organizing your code. Go is meant to be written in a very linear and naïve way, and it turns out quite well when you just do that.


Maybe I'm missing something obvious, but that seemed to be the only option for injecting stub implementations in integration tests.


I prefer the testing libraries that do monkey-patching and consider mock implementations of interfaces to be an antipattern in Go, especially if it's the only reason for putting an interface between the calling code and a type.


Interfaces are one way to inject stubs, and a good way, but not the only way. You can also use function values as callbacks.
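
For illustration, here is a minimal sketch of that approach (all names are invented for the example): the dependency is a plain function type, the production code receives it in the constructor, and a test passes an inline stub, with no interface or mock needed.

    // notifier.go
    package notify

    // SendFunc is the injected dependency: anything that can deliver a message.
    type SendFunc func(to, msg string) error

    type Notifier struct {
        send SendFunc
    }

    func New(send SendFunc) *Notifier {
        return &Notifier{send: send}
    }

    func (n *Notifier) Welcome(user string) error {
        return n.send(user, "welcome aboard")
    }

    // notifier_test.go
    package notify

    import "testing"

    func TestWelcome(t *testing.T) {
        var gotTo, gotMsg string
        n := New(func(to, msg string) error { // the stub is just a closure
            gotTo, gotMsg = to, msg
            return nil
        })
        if err := n.Welcome("alice"); err != nil {
            t.Fatal(err)
        }
        if gotTo != "alice" || gotMsg != "welcome aboard" {
            t.Errorf("unexpected call: %q %q", gotTo, gotMsg)
        }
    }

The same shape works for integration tests: wire in the real sender in main() and the stub in the test setup.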


In Go you can jump from your IDE into the HTTP server implementation in three clicks, and you can read the code easily. It's vastly different from other modern languages, which have layers of unnecessary abstraction.

I worked in Java / C# with tons of interfaces all over the place and getters / setters in every file.


That's just bad code, but it's not inherent in the language. You can write bad code in any language.


They are considered people only when it comes to rights, but not when it comes to restrictions and consequences.


Not only is it useful, but it gets straight to the point by presenting exactly what is needed in this context (“it is not synthesized by our bodies”) without getting bogged down by explanations (“because it is an element”). It efficiently delivers the required information. You may not personally like the style, but that doesn’t mean there’s anything wrong with it.


> I didn’t die of being trampled by unicorns either

I think this comment is incredibly telling. Many people tend to treat problems that do not currently affect them only because of the enormous, coordinated efforts of many individuals and institutions the same way they treat problems that do not affect them because they are naturally nonexistent.

There is a huge difference between these two categories of problems. The first will become very visible when the constant behind-the-scenes work is no longer maintained. The second will not. Confusing these two seems to be one of the causes of the mess we currently find ourselves in.

> How do the billion people in Europe do it?

As a European, I can help with the conundrum: we DO have central governments, and they tend to take more responsibility for taking care of people than the U.S. federal government has ever been allowed to. Governments don't have to be continent-wide to exist.


That’s not deep linking (i.e., something set up by the person posting the link). It’s simply the specific device being configured to handle regular web links differently.


I know what you're referring to. However, there's no way on iOS to control this behavior.


There is, but it's not intuitive. Long-press on a link and you can open it without being sent away to the app, and apparently this choice sticks.


But that's not OP deep linking to an app - it's just a normal website URL. It's you having your user agent configured to launch an app for this particular domain. Nothing that OP can do about that.


Even if we set three-value logic aside for a moment, this behavior of NULL still makes sense intuitively.

The value of NULL in a particular table cell is simply a way to indicate 'no value'. If you want the values in a column to be unique, cases where there are no values shouldn't be considered.

This plays out similarly in practice. For example, you may want to allow users to optionally reserve a username, and if they do, those usernames should be unique. It's hard to imagine a use case where, by wanting a field to be both optional (nullable) and unique, you mean that the field should be optional for a single record (!) and required for all the rest. Of course, you mean that IF there is a value, THEN it should be unique.
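
A quick way to see this in practice, as a sketch in Go against SQLite (table and column names are invented; PostgreSQL and MySQL treat a UNIQUE column the same way, SQL Server being the usual exception):

    package main

    import (
        "database/sql"
        "fmt"

        _ "github.com/mattn/go-sqlite3" // assumed driver; error handling elided for brevity
    )

    func main() {
        db, _ := sql.Open("sqlite3", ":memory:")
        defer db.Close()

        db.Exec(`CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT UNIQUE)`)

        // Users who never reserved a username: both inserts succeed,
        // because NULL means "no value" and is skipped by the uniqueness check.
        db.Exec(`INSERT INTO users (username) VALUES (NULL)`)
        db.Exec(`INSERT INTO users (username) VALUES (NULL)`)

        // Users who did reserve one: the duplicate is rejected.
        db.Exec(`INSERT INTO users (username) VALUES ('ludwik')`)
        _, err := db.Exec(`INSERT INTO users (username) VALUES ('ludwik')`)
        fmt.Println(err) // UNIQUE constraint failed: users.username
    }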


But what does this have to do with reasoning? Yes, LLMs are not knowledge bases, and seeing people treat them as such absolutely terrifies me. However, I don’t see how the fact that LLMs often hallucinate “facts” is relevant to a discussion about their reasoning capabilities.


"Hallucinating a fact" that isn't in the training set and is also illogical, is exactly what a failure to reason correctly looks like.


Reasoning involves making accurate inferences based on the information provided in the current context, rather than recalling arbitrary facts from the training data.


Yes, that's what I said. The whole point of hallucinations is that they aren't "arbitrary facts recalled from the training data". They represent attempts to synthesize (i.e., infer) new facts. But because the inferences are not accurate, and because the synthesis process is not sound, the attempt cannot be called reasoning.

It is equally possible to "reason" about things you already know as about things you've just been told. In fact, the capacity to attempt such reasoning speculatively, without prompting, is a big part of cognition.


> But if all models were truly open, then we could simply verify what they had been trained on

How do you verify what a particular open model was trained on if you haven’t trained it yourself? Typically, for open models, you only get the architecture and the trained weights. How can you reliably verify what the model was trained on from this?

Even if they provide the training set (which is not typically the case), you still have to take their word for it—that’s not really "verification."


> Even if they provide the training set (which is not typically the case), you still have to take their word for it—that’s not really "verification."

If they've done it right, you can re-run the training and get the same weights. And maybe you could spot-check parts of it without running the full training (e.g. if there are glitch tokens in the weights, you'd look for where they came from in the training data, and if they weren't there at all that would be a red flag). Is it possible to release the wrong training set (or the wrong instructions) and hope you don't get caught? Sure, but demanding that it be published and available to check raises the bar and makes it much more risky to cheat.


If they provide the training set it's reproducible and therefore verifiable.

If not, it's not really "open", it's bs-open.


The OP said "truly open", not "open model" or any of the other BS out there. If you are truly open, you share the training corpora as well, or at least a comprehensive description of what they are and where to get them.


It seems like you skipped the second paragraph of my comment?


Because it is mostly hogwash.

Lots of AI researchers have shown that you can both give credit to and discredit "open models" when you are given the dataset and training steps.

Many lauded papers drew the ire of Reddit ML or Twitter when people couldn't reproduce the model or the results.

If you are given the training set, the weights, the steps required, and enough compute, you can do it.

Getting enough compute, and getting people to release the steps, are the main impediments.

For my research, I always release all of my code, the order of execution steps, and of course the training set. I also give confidence intervals based on my runs, so people can reproduce them and see if they get similar intervals.


To me, it sounds more like claiming that AI writes better code than Linus Torvalds because a group of random non-programmers preferred reading simple AI-generated Python over Linux kernel C code.


Not sure that I agree with this metaphor, as the utility of code is both subjective (readability) and objective (performance). There are objective ways in which C code simply can’t be matched by a Python implementation. If they were equivalent in terms of performance, I think use of C would crater.

With poetry, the “utility” is entirely the subjective experience of the reader. There is no objective component.

