Heavy use of C++ templates can significantly increase binary size. The same goes for heavy use of generated code (e.g. protocol buffers).


IIUC, Swiss German can't make the cut because there's no standard written form (and with that, not many resources), and the variation between cities is pretty significant.


There really isn't a single "Swiss German" dialect. It is rather a family of dialects, and this family is again part of the larger family of "Alemannic German" dialects, which are spoken in most of southwestern Germany, Switzerland and western parts of Austria [0]. It is really very hard to clearly demarcate "Swiss German" from dialects spoken for example in the Black Forest, around the city of Freiburg im Breisgau, in Vorarlberg or even (historically) in Alsace. My own dialect is Swabian (also Alemannic), and I never had trouble understanding the local dialects around Basel, Berne or Zurich. It is easier for me to understand these Swiss German dialects than, for example, Bavarian dialects.

[0] https://en.wikipedia.org/wiki/Alemannic_German#/media/File:A...


As per some folks I've met (I live in the French-speaking Romandie; almost zero variation here, they just speak more slowly than the French), for people from Zurich it's almost impossible to understand folks from Bern. And those are two big cities pretty close to each other, not some remote mountain valley.

But they can easily switch to a more moderate version, or even to High German, if needed.


> for people from Zurich it's almost impossible to understand folks from Bern. And those are two big cities pretty close to each other, not some remote mountain valley.

This is absolutely not true. Bern is the capital, and many people travel there for work or other reasons. It's also a dialect that's heavily featured on TV (e.g. I remember there was a weather reporter from Bern; I don't know if she still does this), a lot of famous politicians are/were from Bern (e.g. former Federal Council member Adolf Ogi), and many famous musicians sang/sing in this dialect (Mani Matter, Züri West, Gölä, etc.).

Almost all Swiss dialects are mutually intelligible simply due to the high level of exposure to the diversity (and also their relative similarity). There are some people who don't understand Walliserdeutsch well, because it's less represented and also linguistically more removed from the rest - but even that's something you get used to quickly.


Alemannic is still spoken in Alsace, albeit with some of the same issues you listed: no standard written form (Hochdeutsch was used for that) and wide differences even between neighbouring villages. In particular, the northern and southern varieties have different vocalic systems.


Similar to Slovenian - we have 400 dialects, grouped into 7 larger groups based on similarity. Given that there are only about 2 million speakers, that may seem like a large number, but it's a consequence of rather hilly geography.

Differences between some of them are rather extreme - the Prekmurje dialects in particular feel like their own language - so we need to fall back to "book" (standard) Slovenian when talking with people from different regions.


I have zero experience in sales and marketing, but I was wondering how I would start if I had to do that. After reading a bit about it and googling around, I stumbled upon Alex Hormozi and his YouTube channel: https://youtube.com/@alexhormozi

It has a lot of content & good presentation (minimalistic, but crisp). All about sales and marketing. I have no idea how an experienced sales or marketing person would rate it, but to me it sounds reasonable and useful. Give it a try (e.g. try this video: https://youtu.be/FMzKk73iUhw -- his videos have clickbaity titles and are pretty long -- don't let that discourage you).

He also has two books (sales and marketing), but I haven't read them yet. The reviews say they are somewhat basic. I'll still give them a try.


The summary in this article is golden!


I was in a similar discussion some 10y ago. After a few rounds, we concluded that for really good reliability you should do the following:

- assume no service will last long enough or keep your data safe,

- a physical medium is the way to go,

- create 3 backups stored in different locations,

- use a different brand for each backup,

- every 10y, review the backups: check for data degradation, re-backup onto a new medium if the current one is being phased out, and re-encode the content if the format is becoming obsolete & hard to open. [IMO, this is the key point]

Getting a predecessor's old works on a physical medium is a really good feeling (I know how I felt when I discovered the old notebooks from my father, uncle, etc.). Based on my experience and the Internet, CD-R seems to be a good choice if the data volume allows, though it's slowly being phased out. (Fun fact: a few months ago I found the first CD I ever burnt -- it still works flawlessly after 25y, although I didn't verify any checksums.)
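
For the review step, a simple checksum manifest makes silent degradation detectable at the next review instead of at the moment you need the data. A minimal Python sketch (the manifest name and layout are my own invention, not any standard):

    import hashlib
    import os
    import sys

    def sha256_of(path, chunk_size=1 << 20):
        """Hash a file in chunks so large archives don't need to fit in RAM."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def write_manifest(root, manifest="MANIFEST.sha256"):
        """Record a checksum for every file under `root` (run when creating the backup)."""
        with open(manifest, "w") as out:
            for dirpath, _, filenames in os.walk(root):
                for name in sorted(filenames):
                    if name == os.path.basename(manifest):
                        continue  # don't hash the manifest itself
                    path = os.path.join(dirpath, name)
                    out.write(f"{sha256_of(path)}  {path}\n")

    def verify_manifest(manifest="MANIFEST.sha256"):
        """Re-hash every recorded file and return damaged/missing ones (run at each review)."""
        bad = []
        with open(manifest) as f:
            for line in f:
                digest, path = line.rstrip("\n").split("  ", 1)
                if not os.path.exists(path) or sha256_of(path) != digest:
                    bad.append(path)
        return bad

    if __name__ == "__main__":
        if sys.argv[1:2] == ["verify"]:
            damaged = verify_manifest()
            print("OK" if not damaged else f"Damaged or missing: {damaged}")
        else:
            write_manifest(sys.argv[1] if len(sys.argv) > 1 else ".")

On most systems the stock sha256sum / sha256sum -c does the same job, but a ~30-line script you control is easy to keep next to the data and outlives any particular tool.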


This assumes the common resources (CPU, RAM, etc.), not the ones required for LLM training (GPU, TPU, etc.). It's a different economy.

TL;DR: it's not ~free.


Why does GPU matter? Do you think GCP keeps GPU utilization at 100% at all times?


What the OP is referring to requires overprovisioning for the high-priority traffic and a sine-like utilization curve (without that, the benefit of the "batch" tier is close to zero -- preemption is too frequent for any meaningful work when you are close to the top of the utilization hill).

You get that organically when you are serving lots of users, and not many GPUs etc. are used for that. Training LLMs gives you a different utilization pattern, so "best effort" resources aren't as useful in that setup.


Because accelerators (TPUs, GPUs), unlike RAM/CPU, are notoriously hard to timeshare and virtualize. So if you get evicted in an environment like that, you have to reload your entire experiment state from a model checkpoint. With giant models like these, that might take dozens of minutes. As a result, I doubt these experiments are done using "spare" resources - constant interruptions and reloading would result in them finishing sometime around the heat death of the universe :)


In my case, it was the integration testing framework built for a large Python service.

This was ~10y ago, so my memory might not serve me well. A bit of context:

- proprietary service, written in Python, maaany KLOC,

- hundreds of engineers worked on it,

- before this framework, writing integration tests was difficult -- you had a base framework, but the tests had no structure and everyone rolled their own complicated way of wiring things -- very convoluted and flaky.

The new integration test framework was built by a recently joined senior engineer. TBF, it's wrong to call it a framework in the xUnit sense: he built a set of business components that you could connect and combine in a sound way to build your integration test (a rough sketch of the idea follows the list below). Doesn't sound like much, but it significantly simplified writing integration tests (it still had rough edges, but it was a 10x improvement). It's rare to see chaos tamed in such an elegant way.

What this guy did:

- built on top of the existing integration test framework (didn't roll out something from scratch),

- defined clear semantics for the test components,

- built the initial set of the test components,

- held strong ownership over the code -- through code review he ensured that new components followed the semantics, and that each test component was covered by its own test (yep, tests for the test doubles; you don't see that very often).
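
The original code isn't public, so here is only a rough sketch of what "composable test components with clear semantics" could look like; every name and the wiring API are invented for illustration:

    # Hypothetical sketch: each component follows one contract (set_up before the
    # test, tear_down after, in reverse order), so components can be freely combined.
    import unittest

    class TestComponent:
        """Shared contract every component must follow."""
        def set_up(self):
            pass
        def tear_down(self):
            pass

    class FakeUserStore(TestComponent):
        def set_up(self):
            self.users = {}
        def add_user(self, user_id, name):
            self.users[user_id] = name

    class FakeBillingService(TestComponent):
        """Depends on another component; the dependency is wired explicitly."""
        def __init__(self, user_store):
            self.user_store = user_store
        def set_up(self):
            self.invoices = []
        def invoice(self, user_id, amount):
            assert user_id in self.user_store.users, "unknown user"
            self.invoices.append((user_id, amount))

    class ComponentHarness:
        """Combines components and guarantees ordered set-up and reverse tear-down."""
        def __init__(self, *components):
            self.components = components
        def __enter__(self):
            for c in self.components:
                c.set_up()
            return self
        def __exit__(self, *exc):
            for c in reversed(self.components):
                c.tear_down()

    class BillingIntegrationTest(unittest.TestCase):
        def test_invoice_known_user(self):
            users = FakeUserStore()
            billing = FakeBillingService(users)
            with ComponentHarness(users, billing):
                users.add_user(1, "alice")
                billing.invoice(1, 42)
                self.assertEqual(billing.invoices, [(1, 42)])

    if __name__ == "__main__":
        unittest.main()

The value isn't in any single fake but in the shared contract plus the review discipline: every component sets up and tears down the same way and has its own tests, which is what made arbitrary combinations trustworthy.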

Did it work well long-term? Unfortunately, no. He stayed a relatively short time (<2y), and his framework deteriorated under the new ownership.

Travis, if you are reading this and you recognized yourself, thank you for your work!


I wonder if this is a tragedy of "the org wasn't ready for his solution" or more of a "if I were to go back now, I'd notice it wasn't so good."


Without knowing anything, and before reading this comment, I had a feeling it was no. 2. A strong feeling. I think the reason is "many KLOC of Python". I have developed an allergy to big Python programs. I like Python for small things, preferably wrapping some C code.


What languages do you personally prefer for many KLOCs of code?


Why would KLOC in Python be worse than in any other language?


It's about the different ways the language allows you to shoot yourself in the foot.

I've worked on large Python, C++, Java & Go services, and have 10y+ of experience with the first three. C++ allows you to write code that is incomprehensible (even to experienced C++ devs) and to justify its existence (because of the performance gains), but you need to be a top expert to write compilable code of that kind. I'm comfortable diving into any C++ codebase except for libraries like std, boost, abseil, folly, etc.; most of the code there is absurdly difficult to comprehend.

On the other hand, Python leads in the number of ways a junior dev can introduce hell into the code, especially if the team doesn't rely on strict type checking. I have seen horrors.
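
A toy example of the kind of horror I mean -- perfectly ordinary-looking code that only blows up at runtime on the unhappy path, and that a strict type check (e.g. mypy) flags immediately (the function names are made up):

    from __future__ import annotations

    def total_price(quantity: int, unit_price: float) -> float:
        return quantity * unit_price

    def handle_request(form: dict[str, str]) -> float:
        # Form values arrive as strings; without a type checker this sails through
        # review. mypy rejects the call below with something like:
        #   Argument 1 to "total_price" has incompatible type "str"; expected "int"
        quantity = form["quantity"]
        return total_price(quantity, 2.5)

    try:
        print(handle_request({"quantity": "3"}))
    except TypeError as e:
        # Discovered only at runtime: can't multiply sequence by non-int of type 'float'
        print("boom:", e)

Multiply that by hundreds of engineers and many KLOC and you get the horrors.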

I was bewildered when I realized that working with type-checked JavaScript (Closure Compiler) was insanely more productive and smooth than working with Python (before it had type checks).

That's why Java won the enterprise world. It takes an effort to make a mess in Java (but people still manage). Go is in a similar place.


Because you're doing more work per line.

A 20 KLOC Python program is not more complicated than the 100 KLOC C program performing the same function. Quite the contrary. But if you're comparing 100 KLOC of C to 100 KLOC of Python, the Python program may seem unwieldy.


So 500 KLOC of C doing the same thing would be better?..


Not at all.

But when you come across 500 KLOC of C, then you say "wow this is big" and you're forewarned that this is going to be unwieldy. You may underestimate the 100 KLOC of Python. That's all I'm saying.


Why only those two options? It could just be "the company had other priorities, and the developers, while they appreciated Travis's work, didn't find it worthwhile to carry the torch themselves". Having a great testing framework is just one of many things that need devs' attention.


Fair point, it's a false dichotomy; though your example is close to "the org wasn't ready for it," which is itself vague.

Maybe the issue is that it easily comes apart without a dedicated censor, so to speak, and nobody wanted to have that role.


> Did it work well long-term? Unfortunately, no. He stayed a relatively short time (<2y), and his framework deteriorated under the new ownership.

I think the issue is that integration tests are not really a place that sees ongoing "development". You write the integration tests and move on; you are not actively introducing more of them.

I think it's a shame that this is the common view among the people who need to fund this work. Bad integration tests mean a bad dev experience, which then results in increased attrition and dissatisfaction.


> With this in mind insurance is a service worth paying for as long as the fee is lower than utility you gain from it.

Why even consider insurance as an investment (a money-increasing tool)? The stock market would get you higher gains at more controlled risk.

The third-best piece of financial advice I've received is about insurance: "Pay for insurance only if the negative outcome would cause you a significant financial loss" (and is of relatively high probability).

So insuring a house against fire etc. makes sense, but my $2k bike is not worth covering (from my PoV). Or, to take it to an extreme, neither is my (unnecessary) motorbike/boat/jet ski: if they are destroyed, I can continue living without them (a bit of an exaggerated example, but I hope you get my point). Same goes for insuring a house against unlikely events.


Disagree on the probability bit. The important part is the severity of the negative outcome.

And if you can't afford to lose the toy, you couldn't afford to buy it in the first place. Thus, toys should never be insured.

(That is, of course, assuming they aren't mispricing it. I've seen a situation where I considered it: there are companies that offer small-appliance warranties priced at approximately x% of the purchase price -- reliability doesn't enter into it. If you know that with your use case the product is likely to fail within the warranty period... But some searching shows the real business model is to make it nearly impossible to actually file a claim.)


> And if you can't afford to lose the toy, you couldn't afford to buy it in the first place. Thus, toys should never be insured.

You’re getting this wrong.

Being unable to afford to lose a toy doesn't mean you weren't able to afford it; it means you can't afford to buy it twice.

It works the same way with home insurance. I can afford the house. I can’t afford two houses if my current house burns down and I need to buy another one.


> Why even consider the insurance as investment (money increasing tool)?

Most people wouldn't consider it in that way - unless the risk they're insuring is terribly mispriced or they plan to do some insurance fraud.


Tesseract out of the box is terrible for anything non-standard. I tried using it for comic books: unusable. Training it for your own font is doable, but very time-intensive (though the tools are pretty good!).
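
For context, the out-of-the-box usage I mean is roughly this, via the pytesseract wrapper (the input file name is hypothetical; the page segmentation mode plus some preprocessing are about the only knobs you get before resorting to training your own model):

    # Plain out-of-the-box Tesseract via pytesseract. On stylized comic
    # lettering this typically produces garbage without heavy preprocessing
    # or a custom-trained .traineddata for the font.
    from PIL import Image
    import pytesseract

    image = Image.open("speech_bubble.png").convert("L")  # hypothetical input, grayscale
    text = pytesseract.image_to_string(
        image,
        lang="eng",
        config="--psm 6",  # assume a single uniform block of text (one bubble)
    )
    print(text)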


Even though I find this AI hype ridiculous, some of the screenshots look fake (the self-harm-related ones, profanity, etc.). Filtering at that level was solved ages ago.

Are the answers stable/reproducible? (I'm not in a location where this is launched)

But the absurdity of most of the answers is believable. I'm still perplexed at how much Big Tech is betting on this broken (and absurdly expensive) technology.


Similar results have been reproducible on my end; it reminds me of GPT-2. They are pulling from things like Reddit threads and articles from The Onion.


What exactly do you find ridiculous about AI?

