Hacker News | skarap's comments

Python isn't weakly-typed.
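A minimal sketch of the distinction: Python is *dynamically* typed (names can be rebound to any type) but *strongly* typed (values never silently coerce between unrelated types), which is what "not weakly-typed" means here:

```python
# Dynamic typing: a name can be rebound to a value of any type.
x = 1
x = "one"

# Strong typing: values don't silently coerce between types.
try:
    result = "1" + 1       # TypeError, unlike e.g. JavaScript's "1" + 1 == "11"
except TypeError:
    result = int("1") + 1  # the conversion must be explicit
print(result)              # -> 2
```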

> Over the years, I’ve used Perl a lot (that one doesn’t care if it’s int or string… no, correction, in Perl everything is a string, ints just don’t exist. Well, kind of). It’s probably the language designed for throw away coding. I’ve done some Python too (that’s like Perl, but with proper objects in it, and everything is a dictionary there).

The author has almost no real programming experience with Python. Their Perl experience seems to overlap, at least partially, with sysadmin work, where Perl is usually used as a better Bash. All of their repositories on GitHub are Rust, so almost all of the "real" programming they have done was in Rust.

Why would anyone who knows Rust far better than any other language prefer to do their prototypes in anything else?


And the claim about how Perl behaves, and how one is supposed to program in it, is completely wrong.

For a start, one can't even write a correct "if" condition in Perl without knowing whether it is about numbers or strings: numbers are compared with `==`, strings with `eq`, even when the comparison is over plain "scalars."

Second, numbers not only exist, one can control which kind is used and when: one can force integer arithmetic for some expressions or code blocks (`use integer`), keep the floating-point calculation, or even turn on arbitrary-precision integers (`use bigint`).

Third, Perl has scalars, arrays, hashes, and references, each marked throughout with its own sigil (`$`, `@`, `%`):

https://en.wikipedia.org/wiki/Sigil_(computer_programming)

so the types are more obvious when reading the code, much more so than in any language where every use of a variable looks almost the same.

That's why Perl looks "too much like line noise" to those who don't know the meaning of the symbols. But these symbols make the code somewhat "firmer": the programmer must state, far more often, what is expected of a variable, and the expected behavior is more obvious when reading.


> Things break at random for reasons you can't understand and the only way to fix it is to find terminal commands from discussion forums, type them in and hope for the best.

Depends on whom you ask. There are people who use it precisely because, when "things break at random," they absolutely can understand the reasons and actually fix the problem, in contrast to some other OSes (or Linux of more recent years).


To repeat my comment from a previous discussion, which brought a lot of downvotes: what happens when (not if) a self-driving car runs over and kills someone (e.g. because of a software bug)? Do such cases carry criminal penalties? Who is penalized? Or will all autonomous-car accidents with deaths become civil cases? If so, do human drivers get the same new rules, or do they still go to jail if they kill someone by accident (because they got distracted)? Is that fair?

In this particular case I assume the operator will be thrown under the bus, which is also unfair.


Exactly. Asynchronous programming + promises + some unhandled failure case somewhere and you end up with timeouts instead of error responses.
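A minimal sketch of that failure mode, here in Python's asyncio (hypothetical names; the same pattern happens with JS promises): a handler swallows its error and never rejects the future it was supposed to resolve, so the caller sees a timeout instead of the real error.

```python
import asyncio

async def handle_request(fut: asyncio.Future) -> None:
    try:
        result = 1 / 0        # the backend operation fails...
        fut.set_result(result)
    except ZeroDivisionError:
        pass                  # ...and the failure is swallowed:
                              # fut.set_exception() is never called

async def main() -> str:
    fut = asyncio.get_running_loop().create_future()
    task = asyncio.ensure_future(handle_request(fut))
    try:
        return await asyncio.wait_for(fut, timeout=0.1)
    except asyncio.TimeoutError:
        return "timeout"      # caller gets a timeout, not an error response

print(asyncio.run(main()))    # -> timeout
```

The fix is to propagate the failure (`fut.set_exception(e)`) so the caller fails fast with the actual cause.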


So what happens when one of these cars runs over someone?

Tough luck, 5 million dollars to the family?


The same thing that happens when a driver runs over a person: the police exonerate the driver on the spot without investigating and file no charges.

http://www.berkeleyside.com/2018/01/12/breaking-pedestrian-d...


Do you prefer when there is someone to punish? What good does it bring?


I think the point is that there are still people to punish, but it will be harder for people to make that connection. Humans wrote the algorithms which “drive” the car; there’s nothing de novo here.


When I drive a car, I am liable for all accidents that happen with me behind the wheel. An acceptable risk.

When I write self-driving software, am I suddenly liable for all accidents caused by all of those cars when my software is driving? The stakes there are quite a bit higher. Personally I wouldn't take the risk.

How to handle liability with a computer behind the wheel without strangling innovation isn't a solved problem as far as I know.


The companies are taking responsibility, so yes, they're liable and they must have faith in their code to go forward at this time.

The liability-handling is "deep corporate pockets". That's only a risk if there's inadequate improvement.


Justice is generally enforced by a human with a proven track record of being reasonable, a judge. I don’t think a judge will go after a CS intern because she forgot a semi-colon in her pull-request. A judge might very well go after a company that has a pattern of emails arguing that dodging around requirements on brakes saves money by not having to replace them as often.

In the case of manual driving, there is an understanding that humans might kill other humans when operating a motor vehicle, and enforcement focuses on a subset of precautions (alcohol, speed, lanes and, for professionals, continuous hours of work). For lane assistance and robot cars there is no explicit list of guidelines yet, but there should soon be enough cases to build one: clearly communicating to drivers what their responsibility is, and taking the necessary steps when releasing something new (which professional bodies, possibly a dedicated bureau, might need to define). Cases will be more complicated because, presumably, only large corporations can handle them. Responsibility will most likely focus on testing practices and enforcement, not on individual coders.

I suspect the closest structure will be pharmaceuticals and medical devices: it is currently acceptable to sell complex products, made by large corporations, that can statistically be linked to thousands of deaths -- because they come with scientifically sound studies on samples proving that, for a given diagnosis, they save more lives than they cost. A public body defines how to prove that these products help, and private initiatives try to meet those criteria, asking for experimental exceptions to medical practice.

Google has a habit of A/B testing on half of the world's online population, so they might push back against limiting trials to small, clinical-scale tests. There could be an interest in building a realistic simulator and testing code in there, or in building tracks that replicate edge cases with mannequins -- things that actually already exist in the auto industry.

The prize is too large, and the brand damage of being seen as unsafe too severe, for stakeholders not to find a reasonable solution.


It isn’t a solved problem, and no single engineer writes something this complex. The question of how groups of people, governments, corporations and so on can be held liable for wrongful death is, however, settled. Rarely applied, but settled.


Nothing good for the person being punished. But it should deter negligence. Monetary punishment only works on the poor.


Yeah because businesses and the wealthy always disregard financial liability and the potential PR disasters of robots killing people on public streets /s


Businesses themselves don't cause them. They can try to have good processes, verification, and so on, but even then something can happen by simple human error.


Waymo says there are 1.25 million deaths worldwide per year due to car accidents. 94% of them have at least some human error component.

Honestly, we have to get over this "someone must pay" mentality. If the rate of deaths falls from 1.25 million to 125,000 that will be a 90% drop in deaths. We should not be looking to punish Google or other companies for their contribution in reducing deaths because the status quo isn't good.


Is it worthwhile when the tech could save tens of thousands of lives altogether?


What happens when a sober driver, following the rules, runs over someone? I suppose the same will happen here. If negligence is found on the part of the engineers, though, that could get tougher.


The email you're searching for wasn't posted to the list. You won't find it, no matter the interface.


One can read any number of similar articles, but without understanding the data model behind it all, git will remain a complex and fragile beast. The feature set is too large to just memorize every possible command for every possible scenario. If you don't know what the index is, or what a branch is, all of this will look like a bunch of nonsense. On the other hand, your Ruby code won't seem too logical either if you don't know the language.

So if git is the tool you use to get your job done, don't hesitate to spend a day or two reading how it works and how you're supposed to use it.


> Deciding to hire both, and then assigning them to the same team is clearly the wrong solution

Couldn't Microsoft also get into trouble by hiring just one of them (the other would have a good case for being treated unfairly)? And what if they don't have two separate teams that need new members? Should they create a new team just to make sure they don't hurt anybody's feelings?


I do actually sympathise with Microsoft here, and they don't have a lot of good options, but I still don't think they picked an acceptable option.

> Couldn't Microsoft also get into trouble by hiring just one of them (the other one could have a good case for being treated unfairly)?

MS has no obligation to hire any interns, and ultimately they have to make a judgement call about each intern's honesty, integrity, and suitability. If you think one of your interns is a liar or a rapist (or even just lazy or incompetent!), then you have an obligation not to hire them (and no, that doesn't open you up to legal risk). If you have two interns and you can't figure out which one is the liar (or rapist), the obvious choice would be to hire neither.

The situation is unfair to MS, and to at least one (and maybe both) of the interns, but even so, hiring them both and then putting them on the same team seems like the worst possible way to handle it. How could the team possibly function given that history, regardless of who's telling the truth?


> Rape is much more common than false accusations of rape.

You're throwing away a data point: "the police decided not to press charges, or Bob was found not guilty in court."


> Once a judge catches a lawyer in a lie the judge will question everything the lawyer says.

Not a lawyer, but isn't it taken for granted that lawyers (along with everybody else) lie in court, and that it's the jurors' and judge's job to find out who is lying?


No. Lawyers are officers of the court. Especially in civil cases which are just about the balance of evidence, their role is to put their side of the truth in the best light. Not to just make up whatever bullshit they think might win.


Even in criminal cases:

> the defense lawyer may not lie to the judge or jury by specifically stating that the defendant did not do something the lawyer knows the defendant did do.

[source: https://www.nolo.com/legal-encyclopedia/representing-client-...]


Absolutely not. The exact opposite. That's why this is such a problem for Uber's legal team.

