Bias finds its way into every corner. And no, it's not safe unless you do some extensive analysis of the certificate in question. Is this actually a valid/genuine certificate, besides the timestamp? A normal user will have a hard time verifying that.
You don't need to know much about security to write safe programs. Most of the work is done by others. You need to know how to avoid common pitfalls (i.e. yeah, if your service uses 3rd party code to parse customer provided data and your colleague suggests using this awesome C++ library for it, maybe some alarm bells should go off).
Advanced security knowledge is not needed for developing software. What you need is a security team that reviews the work your developers produce. That's it.
> bad algo can almost always be rewritten, leak cannot be reverted.
And while a bad algo can be rewritten, bad software often cannot. Bad programmers are a disaster for scalable software projects. A leak can't be reverted, but neither can the product you didn't ship because you hired the wrong people. Or the company that goes bankrupt because you weren't able to ship a product.
Building software with security engineers instead of software engineers is like trying to win a Formula One race with Fighter Jet pilots.
You also don't need to know much about many algorithms to write workaday programs. Most of the work is done by others. /head nod to Timsort.
You need to know how to avoid common pitfalls (i.e., yeah, if you see nested for-loops with millions of elements per level of iteration, maybe some alarm bells should go off).
Advanced algorithm knowledge is not needed for developing typical business software.
Bad security is a disaster for any public-facing software projects. A product you didn't ship because you hired the wrong people can't be reverted, but neither can a data leak.
Or the company that goes bankrupt because you leaked highly sensitive data.
That doesn't make any sense whatsoever. You do not hire someone because they know about a certain technology. You hire people because they will provide long-term value to your company and are able to adapt to a rapidly changing technology space.
Reversing a binary tree on a whiteboard is certainly a bad question to ask. But I would argue it is, for all intents and purposes, still a miles better indicator of future potential than whether someone knows how/why TLS works. Yeah, you can read that in a book. I can Google it. Useless for interviews.
If you are hiring for a position that requires you to implement TLS, sure, go for it. But that is not the rule. And what are you going to do after they have implemented TLS? Will they be able to work on something completely different?
If you hire based on algorithms, there's a good chance you'll end up with a bunch of people who are good at algorithms, or at least willing to put in the effort to study the algorithms and be able to recite them back with some variance.
If you hire based on knowing how the internet works (TLS, HTTP, BGP, whatever), then you'll be working with a bunch of people that understand how the internet works.
I guess the idea is that TLS is sufficiently complicated that you can take tangents during the interview and establish whether the candidate can understand and communicate complex concepts.
When I'm looking to hire a software developer, I would personally rather hire someone who can write well-structured and maintainable code, can describe a system architecture they worked on in a way that is accessible to people not familiar with the domain space, and is able to say "I don't know that, but I know how to Google it".
Now, if I'm hiring a sysop / devop / security engineer, it's going to be differently focused to some extent, but the same principles apply - core knowledge, communication, humility, ability to research.
These are not even close to equivalent. I agree the algos interview process is broken in many ways and can be gamed to some degree by grinding Leetcode or whatever, but implementing an algorithm is generally much more difficult than recalling a fact.
(Admittedly, candidates are likely to have memorized the algorithm for reversing a binary tree since that is such a common interview question)
You're testing recall only when asking about TLS. You may be testing recall for "how to implement a known algorithm", but there's still plenty of room for testing actual problem solving too.
If you are the interviewer, even if you are asking a standard "how to invert a binary tree" type algorithm question, you hold the cards to keep pushing the bounds for problem solving by extending the question.
Well if you're willing to push boundaries, you can certainly do the same with TLS. Ask why the TLS spec does X instead of Y, for various subtle design decisions. This is probably actually much easier to do than with a binary tree, as TLS is a lot more complex and a lot more subtle. It would certainly be an appropriate line of questioning if you were hiring a crypto engineer, not sure it would be relevant to a security engineer or software engineer.
You can ask algorithmic questions that are not easily googleable or standard problems (basically where they have to invent the solution themselves). Although it takes some effort to find the right difficulty problems (tip: the right difficulty is pretty low) so you don't end up wasting time on stuff nobody can solve or stuff everyone easily solves. You can ask people you know to take a stab at a problem to gauge its difficulty. Or you may even be able to eyeball it.
Is fast really the goal? Or is it just that most engineers don't want to spend time interviewing so that's what we've optimized for?
Some of the absolute best engineers that I've worked with take their time to wrap their heads completely around a problem before diving in. They aren't slow thinkers, but they aren't people who excel at these kinds of interviews either.
I've been doing interviews for a long time now and I find it more effective to surface strong opinions about things they've worked on -- good and bad.
I'm not hiring into a feature factory -- I don't care about fast cogs. I'm hiring people who care about what they do and giving them an environment to thrive in.
I guess it depends, but at the internship level an intern who's fast vs an intern who's slow is like 10x things done sometimes. I work in a math-heavy environment though, so I can understand your point of view.
I see yours also, but we definitely shouldn't be skewing the hiring process toward interns unless your company mostly hires interns (I'd think that's uncommon?).
Fixing an issue around being unable to pay with a credit card is likely anything but a 30 min issue, unless there is some triviality happening, like a config issue. But even then you need to add tests for this. A day or two is the absolute minimum and that would be if this is just a config issue. If it's not, then think of weeks.
From the issue as described, the value "visa" worked, but "visadankort" didn't. Denmark's earlier, national debit card system is called DanKort. Nowadays, almost all DanKort cards are issued through Visa, so they are "Visa DanKort" -- within Denmark, there's the option to bypass some sort of fee for a lower cost of processing.
Presumably the value "visadankort" is there to distinguish these cards. However, they can be processed exactly as normal Visa cards.
This is not unique to Denmark; there is a list of card prefixes like this (see 4571 for DanKort[1]), but perhaps the different processing fee is unusual.
> Two decades ago, it was widely argued that dynamic programming languages were more productive because you didn't have to spend time dealing with type signatures. The only reason, then, to use a statically typed language, was for better performance.
This boggles the mind. I have been using typed languages for as long as I can remember. I never once recall an instance where I was saying "Uh uh, this type signature is driving me craaazy". Like seriously, I don't even know what articles like this are talking about. Types are not your enemy; they surface issues early. Yeah, you can whip something up in Python and JS, but ultimately you DO have to deal with types, except now you don't have a compiler doing this job anymore, you have to do it yourself... Somehow.
The only thing typed languages need is something like `dynamic` from C#, which automatically boilerplates untyped access with reflection, without cluttering your code. I.e. duck typing is one of the things some languages like Java need to get better at. But the situations in which I yearn for this are few and far between.
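To illustrate, here's a rough sketch of what `dynamic`-style access looks like when you have to write it by hand with Java reflection (the `Duck`/`quack` names are made up for illustration):

```java
import java.lang.reflect.Method;

public class DuckTyping {
    // Call "quack()" on any object that happens to have such a method --
    // roughly what C#'s `dynamic` generates for you automatically,
    // written out by hand with reflection.
    static Object callQuack(Object duck) throws Exception {
        Method m = duck.getClass().getMethod("quack");
        return m.invoke(duck);
    }

    public static class Duck {
        public String quack() { return "quack"; }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callQuack(new Duck()));
    }
}
```

The boilerplate (plus the loss of compile-time checking) is exactly why having the language generate this is attractive.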
You don't think about types unless you are new to typed languages... It's really that simple. I never have to think about types. Perhaps it's subconscious, but it's definitely not slowing me down; it's making things faster through robust refactoring, auto-complete and, well, duh: TYPE SAFETY!
Languages like Java (which was THE statically typed language around that time) were extremely verbose and inexpressive. It's hard to write anything in Java without metric tons of boilerplate and repeating the type of every value over and over and over again.
On top of it, with OOP languages it's very easy to box the code in deep layers of ridiculous inheritance taxonomies, making it pretty much impossible to refactor anything after business requirements changed. Barely any escape hatches and absolutely no flexibility.
And type-safety is kind of pointless when everything is of a given type "OR NULL".
I was always a proponent of typing, but after working with Java, I can totally see why so many people back in the day considered static typing not worth it.
Statically typed languages without Sum Types (aka tagged unions aka enums) which includes Java, C# and C++ amongst others drive me crazy: they have no ergonomic way to express "or" types. This is a massive expressive hole which (along with lacking type inference) I believe is responsible for a lot of the hate towards statically typed languages.
Some of my coworkers complain about being required to use TS instead of JS, and I just wonder why in the world you would want to use JS in a massive codebase.
It is, although the syntax for proper sum types (it calls them "discriminated unions") is really verbose in Typescript. I wish they'd make it more terse so people didn't use untagged sum types and hacks like `typeof` all the time.
Sealed classes in Kotlin largely solve that problem for me, and Java is getting those. It's not quite the same, but most of the time I find that if I'm trying to do Int|String, it's primitive obsession and there is actually a better sealed type hierarchy I'm missing.
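For what it's worth, Java's sealed interfaces (17+) combined with records already get you most of the way there -- a rough sketch of an Int|String-style sum type (the type names here are invented):

```java
// A sealed hierarchy as a poor man's Int|String sum type (Java 17+).
// The compiler knows IntVal and StrVal are the only possible cases.
sealed interface IntOrString permits IntVal, StrVal {}
record IntVal(int value) implements IntOrString {}
record StrVal(String value) implements IntOrString {}

public class SealedDemo {
    static String describe(IntOrString v) {
        if (v instanceof IntVal i) return "int: " + i.value();
        if (v instanceof StrVal s) return "string: " + s.value();
        throw new IllegalStateException("unreachable");
    }

    public static void main(String[] args) {
        System.out.println(describe(new IntVal(42)));
        System.out.println(describe(new StrVal("hi")));
    }
}
```

With pattern matching for switch (Java 21+) the `describe` method can even be checked for exhaustiveness, which is most of what people want from sum types.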
If you care about the exact underlying memory layout such high-level types are way too blackbox-y. I doubt that historically this specific feature was responsible for any "static typing hate" (because languages with such high-level type systems were quite obscure 10 or 20 years ago). I have the impression that this hate was specifically a web-dev thing because many Javascript developers never experienced what it's like to work with a statically typed language until Dart and Typescript showed up (and then it suddenly was the best thing since sliced bread).
> Statically typed languages without Sum Types (aka tagged unions aka enums) which includes Java, C# and C++ amongst others drive me crazy: they have no ergonomic way to express "or" types. This is a massive expressive hole which (along with lacking type inference) I believe is responsible for a lot of the hate towards statically typed languages.
We are talking about Python and JavaScript from the top of this comment chain, and the 'user experience' of writing Java/C# is closer to Python and JavaScript than to, say, Haskell.
Why? Rust's type system is basically a more sophisticated version of Java's, JS is in the opposite direction - a much simpler dynamic type system. Rust's lifetimes and borrow checker is additional complexity that JS doesn't have. Rust has longer compilation times than JS, longer than Java. Etc.
>Why? Rust's type system is basically a more sophisticated version of Java's, JS is in the opposite direction - a much simpler dynamic type system.
I would argue that a sophisticated type system is closer to dynamic typing than a simple type system. A type system is like guard rails that prevent you from doing certain things. A sophisticated type system gives you more freedom and possibilities than a simple type system, and hence is closer to a dynamic type system without the guard rails at all.
Java gives you a decent way to build guard rails, Rust gives you a somewhat better way, JS doesn't give you any way at all.
A more sophisticated type system gives you more elegant ways of expressing constraints, with increased language complexity. Java falls in the middle with a fairly simple language and type system. Any code which you do not know how to structure within Java's type system can be written with some Objects/Maps/Collections and a bit of runtime logic - basically giving you what you'd do in JavaScript, though Java's type system is sufficiently powerful for 99% of real-world use cases.
The main issue I see Python and JS programmers face when coming to a statically typed language like Java is the additional complexity of a type system. Saying that a more complex type system would somehow be more familiar is just backwards.
> Any code which you do not know how to structure within Java's type system can be written with some Objects/Maps/Collections and a bit of runtime logic
Yeah, but that means jumping through hoops because of the lack of the type system. And that's what many people don't like, hence the whole thread. For you that might be fine, but please understand that there are other people out there who are not okay with it so easily.
> Java's type system is sufficiently powerful for 99% of real-world use cases
Rather the opposite. Every big project that uses reflection/introspection or annotations or some kind of code-generation tooling shows that the type system is not sufficient. Yeah, there are some cases where the above techniques were used and could have been avoided (while keeping type safety), but often they are just required.
And then Java does not even have proper sum types or union types (enums only work when the structure is identical, and I mean... we could count some strange workarounds with static classes and private constructors that pretty much no one uses due to horrible ergonomics). And these literally appear everywhere.
> > Any code which you do not know how to structure within Java's type system can be written with some Objects/Maps/Collections and a bit of runtime logic
> Yeah, but that means jumping through hoops because of the lack of the type system. And that's what many people don't like, hence the whole thread. For you that might be fine, but please understand that there are other people out there who are not okay with it so easily.
Not jumping through hoops... the point is that you can write untyped code in Java similar to JS with similar complexity. If you really think JS is somehow better in this regard, then writing horrible, poorly-typed Java is not very different.
> Rather the opposite.
The opposite as in Java is arguably the most relied-upon language for enterprise-grade backend code, because it lacks 99% of features people would want? Okay.
> Every big project that uses reflection/introspection or annotations or some kind of code generation tooling shows that the typesystem is not sufficient.
Annotations and reflection are a feature of Java; they are not external to the language. Annotations and code generation are separate features from the type system - Rust's code generation and annotations are very commonly used. Reflection is equivalent to the runtime type checking that is common in JS. How can you say Java is worse than JS in this regard when the poor parts you point out are basically what JS does?
> And then Java does not even have proper sumtypes or union types (enums only work when the structure is identical and I mean... we could count some strange workarounds with static classes and private constructors that pretty much no one uses due to horrible ergonomics). And these literally appear everywhere.
First of all JS and Python do not have these either, so saying that Java is somehow worse in this regard is ridiculous. Furthermore the usefulness of sum types is fairly limited - what problem are you trying to solve with sum types in Java? Implementing an Either<A,B> is trivial in Java.
I think that applies to you a lot more. You're criticising Java for giving you a whole bunch of features which don't exist in Python or JS, while also saying the language sucks compared to Python/JS because it doesn't have features of Haskell. The fact that you think JS is somehow closer to Rust than Java makes me think you have very limited experience with these languages.
> Not jumping through hoops... the point is that you can write untyped code in Java
Sure, but that already means you are jumping through hoops.
> > > Java's type system is sufficiently powerful for 99% of real-world use cases
> > Every big project that uses reflection/introspection or annotations or some kind of code generation tooling shows that the typesystem is not sufficient.
> Annotations and reflection are a feature of Java; they are not external to the language. Annotations and code generation are separate features from the type system - Rust's code generation and annotations are very commonly used. Reflection is equivalent to the runtime type checking that is common in JS. How can you say Java is worse than JS in this regard when the poor parts you point out are basically what JS does?
Well, maybe I misunderstood you. And with "sufficiently powerful" you just meant "someone can kinda use it to build something". Well then, yes. Saying it just doesn't make much sense to me in a discussion about ergonomics where people complain about type system limitations.
> Implementing an Either<A,B> is trivial in Java.
Okay, let me copy&paste how this can be defined in F#:
    type Result<'TSuccess,'TFailure> =
        | Success of 'TSuccess
        | Failure of 'TFailure
or maybe a language closer to Java, here it is in Scala3:
    enum Either[A, B] {
      case Left[A](a: A) extends Either[A, Nothing]
      case Right[B](b: B) extends Either[Nothing, B]
    }
I'm curious to see the "trivial" implementation in Java that equals the ones from F# and Scala. Mind that both solutions I gave allow to add a "fold(left -> handleLeft(...), right -> handleRight(...))" function which allows to manipulate the content, depending on what it is _without using any casts or reflection_. This is possible in Java, but I don't know any "trivial" solution.
> Sure, but that already means you are jumping through hoops.
Map<String,Object>, Object, a few other things you may need are standard Java, so not sure how this is 'jumping through hoops'. It's not necessarily more complicated, just not idiomatic Java - the point is you CAN write shitty JS-style code if you want, how is that an argument for why JS is somehow better than Java?
> Well, maybe I misunderstood you. And with "sufficiently powerful" you just meant "someone can kinda use it to build something". Well then, yes.
Are you not aware that many prominent tech companies have a significant Java stack? Google, Amazon, Uber, Airbnb, Netflix, etc? Are you not aware of major open source Java projects such as kafka, elasticsearch, hadoop, android sdk, etc? What point are you even trying to make?
> Saying it just doesn't make much sense to me in a discussion about ergonomics where people complain about type system limitations.
What doesn't make sense is saying that Java's type system makes it a worse language than JS or Python, or that JS or Python are closer to Rust/Haskell.
> I'm curious to see the "trivial" implementation in Java that equals the ones from F# and Scala. Mind that both solutions I gave allow to add a "fold(left -> handleLeft(...), right -> handleRight(...))"
Here you go:
    class Either<L,R>
    {
        public static <L,R> Either<L,R> left(L value) {
            return new Either<>(Optional.of(value), Optional.empty());
        }

        public static <L,R> Either<L,R> right(R value) {
            return new Either<>(Optional.empty(), Optional.of(value));
        }

        private final Optional<L> left;
        private final Optional<R> right;

        private Either(Optional<L> l, Optional<R> r) {
            left = l;
            right = r;
        }
    }
Yes, it's longer and slightly more complicated, mainly because Java doesn't have pattern matching, and yes you can add typesafe fold and map functions to it without reflection. That being said, you gave me examples in languages with more sophisticated type systems than Java - these say absolutely nothing about why Java is worse than Python or JS.
> Map<String,Object>, Object, a few other things you may need are standard Java, so not sure how this is 'jumping through hoops'
When you put various things into this map and then later get them out and want to work with them, you will have to cast them to be able to do anything useful with them.
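To make that concrete, a toy sketch of that untyped style (the field names are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class UntypedStyle {
    // JS-style untyped record: every field is just an Object.
    static String render(Map<String, Object> user) {
        // Every read needs a cast; if the value turns out to be of
        // another type, this blows up at runtime -- exactly like JS.
        String name = (String) user.get("name");
        int age = (Integer) user.get("age");
        return name + " is " + age;
    }

    public static void main(String[] args) {
        Map<String, Object> user = new HashMap<>();
        user.put("name", "Ada");
        user.put("age", 36);
        System.out.println(render(user));
    }
}
```

So yes, you can write it, but every access point becomes a place where the compiler can no longer help you.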
> the point is you CAN write shitty JS-style code if you want, how is that an argument for why JS is somehow better than Java
For the sake of our discussion: I have never said that JS is somehow better than Java. I much prefer static type systems and would always pick Java over JS for a personal non-browser project. But that's not the point of this discussion, so I'm playing "devil's advocate" here. It's important to understand and accept the shortcomings of static type systems - that's what I try to explain here.
> What doesn't make sense is saying that Java's type system makes it a worse language than JS or Python
You need to re-read what I (and the others in this subthread) have written. It is completely valid to criticize one part of language X compared to language Y without implying that language X is worse than language Y overall.
> [Java Either implementation]
> Yes, it's longer and slightly more complicated
And not only that, it is also not equivalent to the F# / Scala examples. Or if it tries to be equivalent, it is buggy.
E.g.:
Either.left(null)
Now I have an Either that is neither left nor right. Compared to the Scala example (because Scala also has to deal with the existence of null):
Left(null)
This will create an instance of the type Left which contains a null-value. As I said, if I add a `.fold` method, then it will fold over the null. E.g.:
Left(null).fold(left => "Left value is " + left, right => "Right value is " + right)
This would return the String "Left value is null". You can't do this with your example implementation in Java, because the information is lost.
It is _not_ trivial to do that in Java, even when relying on already similar functionality like the built-in Optional type.
> When you put various things into this map and then later get them out and want to work with them, you will have to cast them to be able to do anything useful with them.
Considering this entire hypothetical is an edge case, that's a minor inconvenience.
> But that's not the point of this discussion, so I'm playing "devil's advocate" here. It's important to understand and accept the shortcomings of statical type-systems - that's what I try to explain here.
That is the point of the discussion, the original claim I was objecting to was:
'This is a massive expressive hole which (along with lacking type inference) I believe is responsible for a lot of the hate towards statically typed languages.'
You're pointing out weaknesses in a subset of statically typed languages, and these are only weaknesses when compared to better type systems - not when compared to dynamically typed languages. I never claimed that Java had a perfect type system - I prefer Haskell and Rust.
> You need to re-read what I (and the others in this subtread) have written. It is completely valid to criticize one part of language X compared to language Y without implying that this language X is worse than another language Y overall.
It's not valid when you're using Rust or Haskell to show weaknesses in Java relative to JS. The original context was Java/C#/C++ vs Python/JS.
> Now I have an Either that is neither left nor right.
You're right. Here's a simple example without this problem:
    class Either<L,R>
    {
        public static <L,R> Either<L,R> left(L value) {
            return new Either<>(value, null, true);
        }

        public static <L,R> Either<L,R> right(R value) {
            return new Either<>(null, value, false);
        }

        private final L left;
        private final R right;
        private final boolean isLeft;

        private Either(L l, R r, boolean isLeft) {
            left = l;
            right = r;
            isLeft = isLeft;
        }
    }
> It is _not_ trivial to do that in Java, even when relying on already similar functionality like the built-in Optional type.
It is trivial; it's just more awkward and lengthy, but not complex at all - also no Optional. Plus there are stable libraries providing types like Either<A,B> and other functional language features. Anyway, I'm not here to defend Java's type system against Haskell; my point is that Java's type system is a huge feature when compared to JS or Python.
> Considering this is entire hypothetical is a edge case, that's a minor inconvenience.
I believe this is not an edge case. I have to deal with that almost every day, and I'm working with a language that has a much more advanced type system than Java. But I guess there is no hard data for that, so everyone can believe what they want. :)
> You're right. Here's a simple example without this problem:
If it's so trivial, then why did you even have to fix something in your first approach? Also, your second approach still has flaws and is not equivalent. Maybe you want to figure it out yourself this time? :)
Anyways, I guess we are talking different points. Have a nice day!
First of all, of course Rust has some additional complexity because it is close to bare metal. But if you think this complexity away (to make it comparable to e.g. JavaScript), here are some reasons:
1) Better type-inference. In Java this has improved but is still much more clunky and boilerplatey. Good type-inference is important to not annoy the user.
2) Traits / type-classes. They enable a way of programming that comes much closer to duck-typing and avoid wrapping your objects in wrapper classes to support interfaces like you are forced to do in Java.
3) Better and less noisy error handling (looking at you Java, Go, C++ and most other languages)
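On point 1: Java's inference has indeed improved with `var` (Java 10+), though it remains limited to local variables -- a quick sketch of the difference (the variable names are invented):

```java
import java.util.HashMap;
import java.util.List;

public class InferenceDemo {
    static int demo() {
        // Pre-Java-10 style: the type is spelled out on both sides.
        HashMap<String, List<Integer>> scoresOld = new HashMap<String, List<Integer>>();
        scoresOld.put("bob", List.of(7));

        // Java 10+ local-variable inference: less clunky, but only
        // for locals -- Rust infers across far more contexts.
        var scores = new HashMap<String, List<Integer>>();
        scores.put("alice", List.of(1, 2, 3));
        return scores.get("alice").size() + scoresOld.get("bob").size();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```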
You can't 'think away' the additional complexity of lifetimes and the borrow checker though. It's something you have to understand and keep in mind.
> Better type-inference. In Java this has improved but is still much more clunky and boilerplatey. Good type-inference is important to not annoy the user.
In my opinion, Java without type inference is fine - it's a very minor issue, and there is a fairly limited scope of code that would actually benefit from type inference in terms of quality/readability. If you use a decent editor most of the redundant typing is auto-completed anyway.
> Traits / type-classes. They enable a way of programming that comes much closer to duck-typing and avoid wrapping your objects in wrapper-classes to support interfaces like you are forced to do it in Java.
Eh, Rust traits are better than Java's interfaces, but you can implement multiple interfaces for your own objects in Java without any wrappers. The issue is extending external objects to support new interfaces. Plus, the point is to have correct code defined and checked at interface/trait boundaries, something JS doesn't do at all.
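To make the wrapper point concrete, a minimal sketch (the `Named` interface is made up): in Java you cannot implement a new interface for a class you don't own, such as `java.io.File`, so you're forced to wrap it, whereas a Rust trait can be implemented for a foreign type directly.

```java
import java.io.File;

// A new interface we'd like existing classes to satisfy.
interface Named {
    String name();
}

// java.io.File can't be made to implement Named retroactively,
// so Java forces an adapter/wrapper class.
final class NamedFile implements Named {
    private final File file;
    NamedFile(File file) { this.file = file; }
    @Override public String name() { return file.getName(); }
}

public class WrapperDemo {
    public static void main(String[] args) {
        Named n = new NamedFile(new File("/tmp/example.txt"));
        System.out.println(n.name());
    }
}
```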
> Better and less noisy error handling (looking at you Java, Go, C++ and most other languages)
The error messages for Rust can be much more complex than Java, and are probably more complex on average, simply because it's a more complex language and type system.
I would say the benefits of Java's type system far outweigh the imperfections and tiny costs when compared to a language like JS.
> You can't 'think away' the additional complexity of lifetimes and the borrow checker though
Of course you can't. But the problem of lifetimes does not go away, no matter whether you use static or dynamic typing. However, in JavaScript this problem does not exist, so obviously that can't be compared to Rust. If you use Rust, it's because you _need_ this for performance.
> In my opinion, Java without type inference is fine
Fair enough, but most people see that very differently, hence the unhappiness.
> Eh, Rust traits are better than Java's interfaces, but you can implement multiple interfaces for your own objects in Java without any wrappers. The issue is extending external objects to support new interfaces.
That's exactly what I said or at least meant. :)
> The error messages for Rust can be much more complex than Java
No no, not the error messages that the rust compiler gives you. I'm talking about error handling that the developer does.
> I would say the benefits of Java's type system far outweigh the imperfections and tiny costs when compared to a language like JS.
I agree, but just because the benefits outweigh the problems, that doesn't mean people won't be frustrated by these problematic parts. And let's not call them imperfections. Java is _so_ far away from perfection that calling them that gives your post a sarcastic touch.
> However, in javascript this problem does not exist, so obviously that can't be compared to Rust.
Static typing and a whole bunch of other things don't exist in JS as well. The point of a comparison is to highlight the differences and similarities. You were the one who said Rust is more similar to JS than Java, you can't just reduce the language to 'type inference and traits' - things JS doesn't have at all and say it's somehow similar to JS.
> Fair enough, but most people see that very different, hence the unhappiness.
You have data supporting this 'most people see it...' argument? Or you just made it up on the spot?
> No no, not the error messages that the rust compiler gives you. I'm talking about error handling that the developer does.
Error handling in Java is very straightforward, and it's far more similar to JS than Rust is.
> And let's not call it imperfections. Java is _so_ far away from perfection, that just gives your post a sarcastic touch.
Java is a great language. It's not overly complex, it's fast, it has great tooling and IDE support, it has one of the largest library ecosystems. It has a huge developer community, many high profile projects, many high profile companies use it. It's easy to find decent Java developers for your project. It has a decent type system - far better than JS or python. From a pragmatic point of view Java is one of the best languages in existence.
Not a C++ developer, but from what I read, it says: "A variant is not permitted to hold references, arrays, or the type void". These are quite some limitations and don't really give a "smooth" experience.
As for void, apparently the reasons I had in my head are not the reasons in real life. I thought it might be because of a destructible requirement on the type, but it turns out there really isn't a good reason why they disallowed it, and that a future standard might allow it.
In any event, there are a multitude of variant implementations that allow all sorts of things depending on the behavior you want. Nothing is forcing you to use the standard library.
I just wonder why it is hard to make a variant type that works with everything. Well, if the language prevents e.g. reference rebinding, not much can be done.
But not being able to put _anything_ into a variant severely limits the way it can be used for abstraction. Especially for library authors, because they might not know what their users will pass them. So when they write a generic method that uses variants under the hood, they would have to "pass down" the restrictions to the user and tell them not to pass e.g. void. Same for the interaction of two libraries.
> I just wonder why it is hard to make a variant-type that works with everything.
Because standard C++ has to work for the general case. It's specifically designed so that more pointed or specific implementations that have other concerns (e.g. supporting C-style arrays or void) can do so, accepting the runtime penalties if desired.
> if the language prevents e.g. reference rebinding, there isn't much that can be done.
References are syntactic sugar over pointers at worst, and a means of optimization at best. C++ is a pass-by-value language first and foremost, and goes to great lengths to keep things as optimizable as possible when it comes to standardization.
Again, variant supports pointers just fine. It also supports smart-pointers just fine. There's nothing preventing you from using those.
Remember that C++ has to work across all architectures, platforms, etc. Not everything handles references the exact same. Compilers are afforded many liberties when it comes to them in order to optimize for the target machine.
> But not being able to put _anything_ into a variant severely limits the way it can be used for abstraction.
Aside from `void`, I disagree. Like I said before, you can implement your own variant quite easily if you want those things. There are decent reasons (except for `void`) not to include them in the standard.
> Especially for library authors, because they might not know what their users will pass them.
I'm not so sure I understand what you mean here. Templates tell library authors /exactly/ what will be passed to them.
> So when they write a generic method, that uses variants under the hood, they would have to "pass down" the restrictions to the user and tell them not to pass e.g. void.
They don't have to tell the user anything. The compiler will inform the user that void is not allowed when the instantiation fails to compile.
> Or am I misunderstanding the constraints here?
Yes. Most of what std::variant does in terms of type checking happens at compile time. Unless a program has been modified after compilation (which should never be the case), there's no possible way for the "wrong type" to be passed to a variant at runtime, because the assortment of possible types has been checked at compile time.
---
EDIT: I just realized why `void` may not be included, though I admit it's speculation.
`void` is not allowed as the type of a function parameter; it is not equivalent to e.g. `decltype(nullptr)` and is simply the absence of a type.
Therefore, there is no valid specialization of `operator=(const T&)` that would accept a "void type" because there is no way to implicitly invoke `operator=(void)` (you'd have to call, literally, `some_variant_object.operator=()`, which is very un-C++).
The alternative would be to have a method akin to `.set_void()`, and it could only be enabled if `void` was one of the types passed to the template parameter pack - and, if it is the only type passed to the parameter pack, all other `operator=()` overloads would have to be disabled.
This is an incredibly confusing API specification, and I can understand it never being included in the standard.
Note that, in this case, there'd be a difference between "no value" (null) and a "void value" (not-null), which is overly confusing and, again, very un-C++ (or un-C for that matter).
If this is the rationale, it makes a lot of sense and I agree with it. If I need a variant that supports `void`, I'd probably write my own anyway because there's probably a domain-specific use case.
>This boggles the mind. I am using typed languages since I can think. I never once recall an instant where I was saying "Uh uh, this type signature is driving me craaazy".
You might not, but this was a common sentiment (not saying it is necessarily a valid one, mind you, but it was common). Were you programming 2 decades ago and/or paying attention to the average sentiment expressed in blogs/etc re types and dynamic languages (and the general tone up to around 2012 or so, even on HN)?
Another common sentiment was that "who needs types when you have TDD".
The people who have problems with types and think removing them makes programming easier are the same people who have problems with syntax and think that replacing text with some kind of graphical representation makes programming "easy for non-programmers".
> I never once recall an instant where I was saying "Uh uh, this type signature is driving me craaazy".
Is that sentiment based on modern languages, though?
While modern in its place in history yet adhering to older principles, Go is a language where I frequently hear exactly that from people evaluating it. Languages with more complex type systems bring tools to help alleviate those concerns. Not all of those concepts were widely available two decades ago. Java, for example, which was probably the most popular typed language of that time, did not even provide generics until 2004. Being able to write a function once and use it for many (dynamic) types was no doubt seen as a big improvement for many use cases.
Type systems are back in fashion now largely because they are much more usable now, especially outside of the academic languages.
> This boggles the mind. I am using typed languages since I can think. I never once recall an instant where I was saying "Uh uh, this type signature is driving me craaazy". Like seriously, I don't even know what articles like this are talking about.
This was OVERWHELMINGLY the sentiment on hacker news a decade ago. Strongly statically typed languages were NOT WELCOME on this website.
A lot of that can probably be attributed to major advances in type system ergonomics. A verbose, clunky type system with poor inference can easily be worse than none at all. Ten years ago, there just weren't that many popular, practical languages with really good type systems. Now there are quite a few.
Statistical analysis is a domain I am more than happy not to care about; what I had to bear with during my engineering degree was already enough.
Still, given that Python and R are just glue languages for C, C++ and Fortran libraries, I'd rather use the source directly, or bindings to typed languages.
Modern C++, .NET, Java or ML-based languages are just as effective.
And here is a fun fact: I have spent 4 years working for life sciences companies, where several researchers I got to know would do statistical analysis in Excel + VBA, eventually using VB.NET as well for more complicated stuff.
Where dynamic typing helps a lot is for creating frameworks, like Django, RoR, etc.
Because the framework is working on a level above the application, it can easily deal with objects without worrying about what is inside them.
However, people got caught up in this no-type nonsense and took it to all corners of every app development.
When you are creating, say, web apps, you may not declare types in dynamic languages like Python, but you sure as hell will rely on known attributes of those objects.
There are a few edge cases, where dynamic types allow you to build logic on user-defined sets of data, but those cases are few and far between. Even those can be solved using generic data containers or custom data protocols such as XML.
In Python and Ruby it's an extremely common pattern to return a dictionary with a half-dozen entries at most that will be consumed at a single location.
Defining an entire class for this sort of extremely common use case is for the most part a waste of time.
These languages allow for the formalization of those types by creating classes out of them, but looking at what 50% of my functions do in web dev code, they're returning tuples, small dictionaries, or standard library objects.
Language features are a secondary consideration when compared to ecosystem and library availability as far as getting actual stuff done.
I never claimed those languages don't exist, but a language is a full package, and right now I'm not seeing even up-and-coming languages supporting trivial structural return types as an idiom. Granted, this is mostly a personal pet peeve given my observations and the code I work with.
Me:
> Even for that there is no need for dynamic typing anymore. This problem has been solves with type parameters (aka generics) and type-classes.
This is obviously a general statement. It means "there can be a programming language where there is no need for dynamic typing to solve this kind of problem". And then I continue that this problem has been solved. That means there is at least one such language already in existence which solves this problem with certain techniques.
Then you:
> Not true.
And once I offer you a concrete implementation as an example, you suddenly change the topic to "but... no mainstream language". And if I would present you a language that could be considered as mainstream, I'm sure you would find another restriction such as "but this language does not have enough... libraries".
> Types are not your enemy, they surface issues early.
Not only that, but they document the code.
When working with a new framework or library in Python or JavaScript I never know what I can do, I have to look at the documentation constantly.
Quite often I'm still left scratching my head or doing stuff like "print(dir(result))" to figure out what I can do with whatever that function returned.
With a statically typed language I can see what type the function expects and what it returns. If I don't know a type, I can discover what it can do in a few clicks in my IDE.
Statically typed languages only eliminate SOME things you'd otherwise have to test. In the end, you can eliminate almost anything besides logic errors; unfortunately, those are a pretty big portion of bugs :D.
So while I would always use statically typed languages for anything that needs to be reliable, I do not see how this is in any way a necessity. You CAN write reliable programs without type safety, you just have to test things you normally wouldn't have to test (i.e. the lack of type safety introduces a whole bunch of null-pointer style scenarios where you get something your code totally didn't expect but has to deal with).
As for performance. Statically typed languages are usually faster, mostly because we do not have the technology yet to make dynamically typed ones as fast (in the general case). Not because there is something inherently different about them.
However, I imagine the technology to make them on par with statically typed languages will take another few decades. Mainly because untyped languages need sophisticated transformations to be fast. That is the job the human normally does for the compiler in typed languages. Things just fit together and play nicely. With dynamic languages, you get one big spaghetti soup with no structure, and now the compiler has to figure out the "intended" types and compile for different estimated function signatures, etc., all while honoring the JIT-style performance considerations (fast startup, low overhead during runtime). This is a gargantuan task that probably will require advanced machine learning before it really takes off.
> I do not see how this is in any way a necessity.
Right, but that's not my point. My point is that using a statically typed language isn't JUST about performance. If I define foo as a string and call it as an integer in my program, my compiler is going to catch it, whereas in a dynamically typed language I may not discover the bug until it hits production.
Statically typed languages typically have better performance.
In C, i++ is one or two machine instructions. In JavaScript, we don't know if i is an int, or something that overrode ++ to download Wikipedia. So, it ends up being a function call in naive JavaScript. Fast forward a decade, and dozens of PhD theses mean that ++ is usually a machine instruction in JavaScript, but it is not guaranteed.
Two decades ago, dynamic languages were more typically scripted / interpreted, and statically typed languages were more typically compiled. Scripting does allow for quicker iteration (especially if compared to a language with a slow compiler) and compilation usually does produce faster code (at least pre-dating sophisticated JIT compilers).
A dynamic language is not just a statically-typed language where all type labels are removed -- it is the wrong mindset -- don't try to write Java programs using Python syntax.
On type safety: Python is a strongly typed language unlike e.g. C:
>>> "%d+%d=%d" % (2, '2', 4)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: %d format: a number is required, not str
Funny, that's one of the things that drives me nuts about Python. It knows the types, it knows the functions to convert between them, but... it doesn't do the conversions! So frustrating! Why do I have to care about types in these dynamically typed languages??
Also, is sprintf-style string formatting the best example here? I think that feature is type strict in a lot of languages, after all you are declaring the types you want in the formatting string. I imagine most implementations of % in dynamic languages pass to sprintf internally?
>It knows each types, it knows function to convert to each type, but... it doesn't do the conversions! So frustrating!
It surprises me that you are incapable of thinking one step ahead. Implicit type conversions undermine the type system and make your language completely unpredictable. Pretty much everyone regrets this feature in C++ and it is still a huge source of bugs because you have to opt out of it.
You want things to fail with a loud bang instead of continuing and destroying things along the way.
> It surprises me that you are incapable of thinking one step ahead.
Seriously, what is this? You don't know me. Lay the fuck off.
My thoughts on the matter are either that a language should be strongly typed, with type declarations and enforcement at the compiler level, or dynamically typed in such a way that I shouldn't have to worry about types except for specific circumstances. Dynamically typed languages should know their coercion capabilities and perform them losslessly when needed. In reality what ends up happening is that some type coercions are automatic and some aren't, and if you're a polyglot then this is yet another one of those stupid arbitrary details you have to memorize for each language you work with. (I really would like less of those, there are too many different languages for doing the same thing)
One of the benefits of dynamic languages is that they handle type stuff for you. Which I interpret as, "cool, I don't have to worry about types!"
Yeah, I know Python is all about being obnoxiously explicit, and that is one of the things that I really do not like about that language. Now that I think about it, maybe Python should have been statically typed; its "no implicit behavior" opinion works much better under such a regime.
Some of those things in Python reflect its history as a better language between Bash and C, and the gotchas of those languages became verboten in Python.
Elsewhere, e.g. in NumPy, conversions between number types are automatic as long as they don't lose information.
Python is biased towards "explicit is better than implicit". It would rather fail with a TypeError than silently coerce types and risk masking logic/type errors.
> Why do I have to care about types in these dynamically typed languages?
Because Python is strongly typed, and types determine behaviour.
It doesn't matter if you have the types ascribed explicitly or not. In the end, the developer needs to know the types in both cases anyway. Otherwise, if they don't know the types, they will pass in a callable that expects the wrong types (for example, they mix up [int, Exception] with [Exception, int]) and now you just have a runtime error.
If anything, having type signatures like that is a good thing! If you think they are too complicated/ugly or driving you crazy, then you have an incentive to improve them. In many languages, callbacks are now considered bad style, and that's good! Hiding the type signature does not make the problem go away; you just move it into the future, where it will bite you even more.
No type signature? Or documentation in another file somewhere else guaranteed to be out of sync, difficult to find and not enforced by the compiler? No thanks to either of them.
> This boggles the mind. I am using typed languages since I can think.
And therein lies the problem. It's a tradeoff: prototyping speed/readability vs longterm reliability. Modern languages have greatly improved the tradeoff, but it still exists, whether purists believe it or not.
I personally find the highest productivity in adding the majority of tests and typing later, after a design solidifies, not before.
> This is the idea that programs will all talk together via unstructured streams of text.
Curious. To me this is the worst thing you could ever do. Talking via streams to, say, `cat file.txt | grep ERROR | wc -l` is cool. But you could do SOOO much more if programs would actually output structured data streams. You could connect standalone applications much in the same way as visual scripting, where you plug inputs and outputs together and mix them with operators (think of Unreal Engine's Blueprint, just for command-line tooling).
It's a true shame that Linux did not develop a well-defined CLI metaformat describing exactly what parameters exist, their documentation, their completion, what outputs a program produces based on the parameters you provide, etc. You could do true magic with all this information. Right now you kinda still can, but it is very brittle, a lot of work, and potentially breaks with each version increment.
I think it stems from the design failure to build your app around a CLI. Instead, you should build your app around an API and generate the CLI for that API. Then all properties of structured data streams and auto-explore CLI shells come for free.
> Curious. To me this is the worst thing you could ever do. Talking via streams to say `cat file.txt | grep ERROR | wc -l` is cool. But you could do SOOO much more, if programs would actually output structured data streams.
A lot of people have had this thought over the decades, but it hasn't really happened -- PowerShell exists for Linux, but who's using it? The genius of the primitive representation (stringly typed tables) is that it has just enough structure to do interesting processing, but not enough to cause significant mental overhead in trying to understand, memorize and reference the structure.
Case in point of the difficulties of adding more structure without wrecking immediacy of manipulation is json.
For anything with more than 1 level of nesting, I do stuff like
blah | jq . | grep -C3 ERROR
probably a lot more than I do
blah | jq $SOME_EXPRESSION
because it's just so much less mental overhead -- I don't have to think about indexing into some complex hierarchy and pulling out parts of it.
I'm not saying it's not possible to get out of this local optimum, but it appears to be a lot more subtle than many people seem to think. There may be a simple and elegant solution, but it seems it has so far escaped discovery. Almost five decades later, composing pipelines of weakly structured and typed bytes (that, by convention, often are line-separated tables, possibly with tab- or space-separated columns) is still the only high-level software-reuse-via-composition success story of the whole computing field.
Very few use PowerShell for Linux because it doesn't come pre-installed on a Linux box. Otherwise you can bet that people would be using it in large numbers. And yes, I would prefer your second "mental overhead" way, as it involves less typing. Unfortunately, PowerShell is more verbose than bash, not less.
Powershell is unfortunately not the shining example of a shell that best leverages structured/typed input/output succinctly.
But on Windows, sysadmins use powershell heavily. Nearly every IT department that manages windows machines uses Powershell.
> Very few use Powershell for Linux because it doesn't pre-installed on a Linux box
I don't buy that. On a GNU/Linux box, there are few things that are easier than installing a new shell; if you prefer a different shell than bash, it's two commands away. Bash does the job people expect it to do, and they would probably be _very_ alienated if they had to start messing around with .NET gubbins.
>And yes I would prefer your second "mental overhead" way as it involves less typing
Maybe for the first time you would. Maybe if you were to accomplish this specific thing. Anything else? Have fun diving into the manpage of your shell _and_ the programs you want to use, and you'd better hope they share a somewhat common approach to the implemented (object) datatype, or, well, good luck trying to get them to talk to each other.
>Powershell is unfortunately not the shining example of a shell that best leverages structured/typed input/output succinctly
I would just remove the last part, then agree with you: ">Powershell is unfortunately not the shining example of a shell"
> Nearly every IT department that manages windows machines uses Powershell
I mean, what other choice do they have there? cmd? Yeah right, if you want to lose your will to live, go for it.
>I don't buy that. On a GNU/Linux box, there's few things that are easier than installing a new shell, if you prefer a different shell than bash it's two commands away.
When you are SSH'ing into one of 10k containers for a few commands, you will only use what is already there. Bash is there and works and that is what one will use 100% of the time. No one is going to permit Powershell to be bundled to satisfy personal preferences.
You're both moving the goal posts (if powershell were superior I and countless other people would absolutely chsh it for our accounts, since we're already not using bash anyway) and not making much sense. Many sysadmins tend to spend a fair amount of time doing command line stuff and/or writing shell scripts. If powershell offered significant enough benefits for either, of course at least some companies would standardize on it, just like your hypothetical company presumably standardized on using containers to run their 10k services rather than installing some custom k8s cluster to satisfy the whims of one individual infra guy.
When one doesn't have control over the repositories used in build and service machines since they are locked down nor have control over what goes into docker images (only secured images allowed and good luck trying to get your custom tools in), one will use the stuff that is already present.
This is far more common than you think in enterprise corporations. I work at such a hypothetical one, which doesn't use k8s (it has yet to upgrade its native data center to cloud infrastructure).
If PowerShell were bundled by default in Linux distro LTS releases, a lot of sysadmins I know would start using it, since they are already familiar with it from Windows and write all their scripts in it.
> And yes I would prefer your second "mental overhead" way as it involves less typing.
1. It doesn't, just use zsh and piping into grep becomes a single character, like so:
alias -g G='|grep -P'
2. Even apart from that I'm a bit sceptical that you can conjure up and type the necessary jq invocation in less time than you can type the fully spelled out grep line.
Not only that but a fixed object format also "forces" me to parse the data in a particular way. Think of representing a table in JSON. The developer of the producer will have to pick either row-major or column-major representation and then that is how all consumers will see it. If that's the wrong representation for my task I will need to do gymnastics to fix that. (Or there needs to be a transposition utility command.)
Obviously JSON is not suited for tabular data, but perhaps another format could be used. Ultimately, the user shouldn't care about JSONs or tabular objects.
IMHO, I need both text streams and metaformat. YMMV.
Just like GUIs should be, but usually are not, gracefully and responsively scaled to user expertise, the developer experience should be, but usually is not, gracefully scaled to the appropriate level of scaffolding to fit the requirements defining the problem space at hand. I need more representations and abstractions, not less.
Metaformats drag in their own logistical long tail, which in many use cases is wildly heavyweight for small problems. Demanding metaformats or APIs everywhere as The Only Option trades off against the REPL-like accessibility of lesser scaffolding. API-first comes with its own non-trivial balls of string: version control between caller and callee, argument parsing between versions, impedance mismatch with the kind of generative CLIs you envision and with other API interfaces, etc.
The current unstructured primitives on the CLI, composable into structured primitives presenting as microservices or similar functions in a more DevOps-style landscape, represent a pretty flexible toolbox that helps mitigate some of the risks that, in my experience, tend to emerge in Big Design Up Front efforts. I think of it as REPL-in-the-large.
As I gained experience I've come to appreciate and tolerate the ragged edge uncouthness of real world solutions, and lose a lot of my fanatical puritanism that veered into astronaut architecture.
> But you could do SOOO much more, if programs would actually output structured data streams.
Like even treating code and data the same, and minimizing the syntax required so you're left with clean, parseable code and data. Maybe in some sort of tree, that is abstract. Where have I heard this idea before . . .
> I think it stems from the design failure to build your app around a CLI. Instead, you should build your app around an API and generate the CLI for that API.
Now this I am fully in favor of, and IMHO, it leads to much better code all around: you can then test via the API, build a GUI via the API, etc, etc, etc.
> Woot? So you are buying a new Fiat Punto and compare it to the latest spec of a Koenigsegg? What are you even doing?
under any other circumstance i’d agree with you. i think we can agree this is not the assertion apple are trying to push in their marketing of the M1.
if Fiat are going to claim their entry level punto is, in real world terms, faster than 98% of all cars sold in the last year, they’re inviting a lot of (fair) comparison.
the 1050 is a budget chip from 3 generations ago. even in the GPU space, nvidia are claiming that their mid range GPU (RTX3080) is outpacing their previous generation top-end GPU (RTX2080ti).
But the 1050 is a specialized GPU vs the general-purpose M1, and besides, the 1050 is from only three years ago. So what do you think the relationship between the XTX6080 and the M4 will look like three years in the future?
I’d like to see comparisons of Tensorflow-gpu operations. Kind of like how Apple used to compare Photoshop filter or Final Cut performance across computers.
Is there a Tesla that has lasted 20 years and, after 500,000 km, is still functioning with little or no maintenance?
Switching to Apple has a cost, switching to Apple with Apple silicon has an even higher cost.
It all depends what you use your computer for. If you buy a Tesla, you probably don't depend on your car; people buying entry-level hardware are people who don't need something fancy, they need a tool, and good enough is enough.
To reverse your analogy, if they have the same price, I take a computer that I can upgrade and actually own over an Apple
I am pretty sure they did, unless he got hired at some VP/Distinguished Engineer level.
And to be clear: If you are unable to solve these common algorithmic questions that companies like Microsoft ask, then that's not the right place for you to work. This is a tangent, but there are literally thousands of companies that won't require you to solve these problems. The thing is, at Microsoft & co. you don't just do this in an interview. You do it at your job too. We do foundational work in many teams and we need to solve algorithmic problems practically every week. If you are unable to code yourself out of a DP problem or scared of NP completeness and approximation algorithms, then maybe find a different job instead of complaining about the interview process?
1. This is Guido van Rossum. If I were him and asked to solve puzzles, I'd tell the hiring company to fuck off.
2. These quizzes aren't so bad, but the pressure and stakes make them incredibly stressful. There's no standard, and often the interviewer is the one who sucks.
> We do foundational work in many teams and we need to solve algorithmic problems practically every week. If you are unable to code yourself out of a DP problem or scared of NP completeness and approximation algorithms, then maybe find a different job instead of complaining about the interview process?
I'm pretty sure your opinion here is not that of your employer.
When I was leaving Google the first time, I asked my skip lead (who was employee #48 there, ended up running all of Search, and was previously a core HotSpot engineer at Sun) why he chose to work at a small startup when, coming off of HotSpot in 1999, he could work anywhere. He replied "Aside from them being one of very few companies with an engineer-centric culture, they were the only company that required I interview. Everybody else was willing to hire me on the spot."
For some personality types - and particularly the ones likely to do world-class work - being challenged is a positive sign. It means that the employer does their due diligence, and they will mostly be working with other people who react positively to a challenge.
> It means that the employer does their due diligence, and they will mostly be working with other people who react positively to a challenge.
To me, due diligence would be more like using software that someone has created. If it feels snappy then they're good enough at algorithms for the kind of software that they create, if it doesn't then maybe it's worth looking into whether or not there's a good reason for that.
Like if you apply for a job at the NYT, I doubt they make you do a timed writing test with people staring at you and asking you questions in the middle. They probably just read some of the previous work you've done.
How he solves the problem doesn’t matter. You don’t care in the interview if he had the answer memorized or if he fumbles through it.
I do not want to work with anyone who finds that getting their hands dirty is beneath them. It is very, very rarely going to be a good use of their time to do those problems. It will often be a good use of their time to teach those problems. A senior engineer, even one who’s unlikely to work with junior engineers on a regular basis, will need to explain their thinking. They need to show humility and compassion. Those are practiced attributes. This precise situation is the best practice you can get - New and Unknown person, some amount of challenge and complexity involved.
Thinking that whiteboarding problems are a bad use of time is a very strong signal for a senior person who is out of touch.
At an old job, my boss was moving desks and he came across an extra copy of CLR "Introduction to Algorithms" and he asked if anyone wanted it. As he was my direct supervisor, he said he'd give it to me only if I promised never to open it, and only to use it as a monitor stand.
I can see that. When I interview, I often compare the difficulty of the questions that different companies ask. I have noticed that I feel a little more respect for the companies that ask the more challenging technical questions (not puzzles) vs. the ones that ask the super basic ones. It does make me think that the ones asking the simpler questions are likely getting lower-quality candidates, and that I would be joining them.
Counter-counter-point: an engineering interview has a non-negligible amount of randomness. Maybe you get a grumpy interviewer or a noob interviewer, or your brain freezes over.
When you are considering hiring a nobody, this is acceptable. You will interview multiple people, and they will interview at multiple places, so the randomness isn't that important.
But if you want to hire one specific guy as a strategic hire, suddenly the randomness may no longer be acceptable.
When I left uni, I got around 15 job offers. I went with the one with the lowest pay, because that's where I had to go through the most difficult interview process.
(Unfortunately this happened in a small Eastern-European country, so the company was an investment bank, not Google.)
I think the best experiment would have been to have him apply blind where the interviewers did not know he was Guido -- and see how he fared on the technical interviews.
> I think the best experiment would have been to have him apply blind where the interviewers did not know he was Guido -- and see how he fared on the technical interviews.
That would be amusing, but he'd have to be disguised, as he's rather recognizable, having done a lot of 'State of the Python' talks and the like.
> This is Guido van Rossum. If I were him and asked to solve puzzles, I'd tell the hiring company to fuck off.
Not sure what you're trying to get at:
MacOS homebrew creator is an effin nobody compared to Guido, therefore he should "know his place", "get in line" and invert a binary tree on the whiteboard and act like an obedient tech interview candidate that he really is?
OR
MacOS homebrew creator should've told Google to fuck off?
Imagine a university asking a Physics Nobel laureate to solve QM problems from an undergrad textbook in order to get hired as a professor. It would be the height of lunacy and incredibly insulting.
Guido is not a Physics Nobel laureate. Going with the physics analogy, Python is more like an overgrown master's-level project, not a Nobel-prize-level achievement by far. He did a good job at growing the Python community, and this is a great achievement! It requires certain personal traits not everyone has. But at the technical level, he made many beginner's mistakes when designing Python, which he tried to fix later, but not always successfully.
An overgrown master's-level project, eh? You could probably say the same thing about the founding of the United States!
"The US constitution is like an overgrown enlightenment dissertation. The founding fathers did a good job at growing the United States, and this is a great achievement! It requires certain personal traits not everyone has. But at a technical level, they made many beginner's mistakes when drafting the constitution, which the country tried to fix later, but not always successfully."
It may come as a surprise to you, but for an outside observer who hasn't been indoctrinated at school by the religion of American exceptionalism, the US might not be a very good example. Think of the American military-industrial complex that apparently defines the country's foreign policy.
My analogy was meant to cut both ways, and serves as much as a criticism of the US constitution as it does a compliment!
You'd certainly be wrong to assume that Americans are all in favor of our foreign policy, to say the least. Aside from that, some of the downsides to the US constitution that inspired my comment include the way the founding fathers totally failed to anticipate that the nation would be completely polarized by a two party system, and that this polarization would happen along geographic lines.
Admittedly, this compromise whereby rural states with lower populations enjoy disproportionate political representation is baked into the constitution being agreed to in the first place (the 3/5ths compromise being relevant here as well). As we've moved to more direct democracy, with things like the electoral college being bound to the (local) popular vote and the direct election of senators, the original intentions of having a federation of mostly autonomous states becomes more and more anachronistic, while still fueling an increasingly polarized electorate that pits high-tax revenue and high population centers like SF and NY against low-tax revenue and low population centers that make up most of the country.
The United States is a large geographic region with a heterogeneous economy. I don't know if a parliamentary form of government would have served this kind of country well, but certainly most friends from abroad who have spoken to me on the subject have implied that proportional representation is far more sensible than FPTP voting, and that parliamentary forms of government avoid the gridlock and polarization that our de facto two-party system engenders.
The number of users is not a metric that makes something Nobel-prize-worthy. The exaggeration of the comparison with left-pad is meant to make this point clearer to you; if you cannot see it yourself, that's sad. Also, going with "funny guy" and "check your head" is not a good argument, by the way, if you did not know that.
Homebrew is 11 years old. I'm willing to bet that as many people (likely fewer) knew Guido in 2002, when Python was 11 years old, or even in 2005, when Google hired him.
And I'm willing to bet when Google hired Guido in 2005, they didn't put him through a coding challenge humiliation clown show day.
This comment really just drives home the nail, with a backhoe bucket, of how awful the state of interviewing in this industry is, and especially the mental state of some interviewers.
I really just want to thank you for putting this useless mentality on display.
Next time there's a tech interview discussion and someone defends it, linking this thread will be very useful.
Almost 30 years as BDFL of Python? Sorry, we don't care; go do 200 LeetCode problems before talking to us. And if you don't spit out the answer a few seconds faster than that fresh graduate, clearly you're a lesser engineer and should be rejected.
I know it's not Microsoft, but D. E. Shaw asked Larry Summers math puzzles when he interviewed there. At the time, he was the president of Harvard University.
I think it's my favorite example of how crazy some interview processes can get, lol.
I'm not sure what's crazier to me: asking someone to demonstrate live proficiency in an abstract skill that is only tangentially related to the actual day-to-day activities of a role, or just assuming that because someone has some high credential they would be good in a given role.
DE Shaw is a financial company filled with maths guys, doing statistical analysis and modeling all day. A math puzzle is the most normal question you could be asked there.
The real question is why Larry Summers is going to a quant interview? Did he apply for a quant role?
For many roles at a hedge fund, being able to do mathematics quickly and intuitively is a valuable skill. Not sure why an economist applying for an MD job would need to be tested on that, though.
"Hello Guido thanks for coming in, we would like you to open up visual studio code and create a sudoku solver that can solve this partially filled out board"
Thanks for this link. I took a course on logic programming in school and it made a big impression. I found Prolog to be pretty mindblowing at the time and I'm happy to see it still is.
You must be injecting crack directly into your frontal lobe if you think that the creator of the world's third most important programming language being asked an algorithms question (unlikely) and failing it (entirely possible) means that he's unqualified to do "foundational work."
I had a Nobel Prize winner as a physics professor in college who got three successive different wrong answers when attempting a freshman physics problem in office hours. That doesn't mean that physics isn't the right place for him to work.
Yes, but frankly, C would still cause more havoc because even if the mainframes somehow continued to run, they would still be effectively inaccessible over any network.
We should not give too much importance to this, but the fact is that Python is now consistently ranked #1, #2 or #3 in enough rankings to take it seriously.
Hi, I'm a programmer for Microsoft. I didn't have to answer silly algorithm questions for them to hire me. I'm nowhere close to VP/Distinguished Engineer level.
(that said, my path to being hired did involve writing a sudoku solver, but that wasn't in an interview for a position at Microsoft)
Every story is unique. I was responding to a comment that was making sweeping claims
My story likely isn't replicable, but everyone has to find their way
I interviewed for Citus a couple weeks before they announced being acquired. I found out about the acquisition on Hacker News before having received an offer. They were able to get me in without going through the hiring process again
Initially I'd be moving to San Francisco but I wasn't eligible for any visas as I don't have a post secondary education. Staying in Canada's worked out
Some examples of things working out here: I was asked to implement some parsing, & well that's not so hard when you've written a Lua parser a year beforehand: https://github.com/serprex/luwa/blob/master/rt/astgen.lua (an astute reader will notice I don't handle precedence here, which is pretty important for arithmetic parsing. That's because I opted to implement shunting yard during codegen phase)
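(For readers unfamiliar with the shunting-yard trick mentioned above: it resolves operator precedence with an explicit operator stack instead of grammar rules. A toy Python sketch, not the linked Lua code, with an illustrative precedence table:)

```python
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}  # higher number binds tighter

def to_rpn(tokens):
    """Convert an infix token list to reverse Polish notation."""
    out, ops = [], []
    for t in tokens:
        if t in PREC:
            # pop operators that bind at least as tightly (left-associative)
            while ops and ops[-1] != '(' and PREC[ops[-1]] >= PREC[t]:
                out.append(ops.pop())
            ops.append(t)
        elif t == '(':
            ops.append(t)
        elif t == ')':
            while ops[-1] != '(':
                out.append(ops.pop())
            ops.pop()  # discard the matching '('
        else:  # operand
            out.append(t)
    while ops:
        out.append(ops.pop())
    return out
```

So `to_rpn("1 + 2 * 3".split())` yields `['1', '2', '3', '*', '+']`: the multiplication correctly binds tighter, which is the precedence handling the parent deferred to the codegen phase.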
What value does this story give a reader? I only think it'd serve to continue an argument about how not everyone is so lucky, which isn't the original argument of "unless you're VP/Distinguished Engineer, not even Guido van Rossum gets to skip whiteboarding"
I know we're talking about FANG interviews, but come on, do you really think people like Guido need to do any of that? They probably don't do "regular" interviews; they go to a restaurant with some important people and that seals the deal.
Everybody is asked something close enough. At higher levels, the focus is often not on coding but on enough depth of design that a person with such a profile might end up educating the interviewer while still satisfying their requirements.
That being said, the ability to solve coding problems efficiently is fair game: not necessarily spitting out an A* graph algorithm in your sleep, but a decent, close-to-real-life coding challenge.
That's certainly invalid. People with 20 years of experience or more are asked questions based upon the role they are interviewing for.
If you are applying for a principal or higher engineering role and your job involves coding, you are asked coding questions. Maybe not focused just on a complex bookish algorithm, but rather something closer to a real-life distributed programming / synchronization problem, for example.
Why the fuck would CS graduates be afraid of DP problems, NP-completeness or approximation algos??? That’s literally the table stakes of our profession.
The problem is rewarding rote memorization in a whiteboard interview, at the expense of actual understanding and the ability to research the problem.
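(To make "DP problems" concrete: the interview staple is something like minimum-coin change. A short Python sketch of the memoized recursion, understanding which is very different from having it memorized:)

```python
from functools import lru_cache

def min_coins(coins, amount):
    """Fewest coins summing to amount, or -1 if impossible."""
    @lru_cache(maxsize=None)
    def best(rem):
        if rem == 0:
            return 0
        # try each coin; keep only subproblems that are actually solvable
        picks = [1 + best(rem - c) for c in coins
                 if c <= rem and best(rem - c) >= 0]
        return min(picks) if picks else -1
    return best(amount)
```

The point of the parent's complaint stands: reciting this from memory proves little, while deriving the recurrence on the spot, under pressure, is what the whiteboard format actively punishes.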
> Why the fuck would CS graduates be afraid of DP problems, NP-completeness or approximation algos??? That’s literally the table stakes of our profession.
If the profession being discussed is "academic work in Computer Science", sure.
If it is "software development", those things absolutely are not really "table stakes".
If it is "software engineering", then I don't think there is broad consensus on what that profession even is, much less what table stakes in it are.
> Are you seriously claiming Guido van Rossum was hired for software development/engineering..?
No, nor do I think he was hired for the other thing I discussed, academic computer science. The post I was responding to made a general comment about "CS graduates" and "our profession"; I was responding to that. Whether that post itself was material to, or merely tangential to, the discussion of GvR's hiring at Microsoft is an argument that, while perhaps interesting to some, was not the focus or concern of my response.
Because these problems can have some subtleties that are hard to get right in a high-pressure environment. Some interviewers will completely write you off for small mistakes.
That sounds interesting! Do you have any examples?
I would have thought that if you actually needed people to perform under pressure, you would design your test explicitly around that, instead of using “comfort with the whiteboard” as a proxy...
That might be a fair statement for some roles, but what about the second part where you code it up and write the code on a whiteboard w/o running/debugging and have to get it right in 45min?
Yeah, they only put the fan in the MBP to annoy people, of course. It has zero use. Go order the MacBook Air and have fun. Maybe you can fry your breakfast eggs on it too.
Oh, I am sure. Judging by how excellently MacBook Pros cool things, having passive cooling will make no difference at all. I mean, during summer I put my MacBook on a large ice block that I freeze overnight; this way I can maintain acceptable build & development speeds during the day. Should work the same for the MacBook Air, no?