
Videos not working for anyone else?


The videos are hosted on Dropbox. Dropbox disabled the videos because there has been too much public traffic.


Hosting publicly accessible videos on Dropbox is not the best strategy.


Unless you have a pro account there is a 20 gig/day bandwidth limit. With pro it's 200 gigs.


ugh can't wait till it comes back up. When it does, I'll be sure to lose many hours of my life to this :)


Ah sorry, we were blocked by Dropbox! We weren't expecting this much traffic and we're moving our videos over to AWS soon. We're going to use Box as a quick fix in the meantime.


The videos on Box don't work very well, unfortunately: I can't pause and resume a video. Every time I miss any information I have to reload the page and start from the beginning of the vid....

Otherwise, great and I like the one video I watched. Thanks.


... YouTube?


Out of interest, how much did the domain cost?


Exactly. I wouldn't either, but they said "when they ship". Pre-ordered one for my lady friend as well.

Doesn't say on the site, but is there an expected ship date?

Ah, it says in the email:

> You’ll only be charged once your Automatic Link ships in May (for iPhone) and Fall (for Android).


One thing that always comes to mind, and this may just be the pessimist in me, but remember the "flash crash"? Where a bunch of trading algorithms triggered short sells as a reaction to other trading algorithms triggering short sells?

Well think of that, but with cars, with people in them...

Edit: I think it's being misconstrued that I am somehow against Google cars, when in fact I'm very much for them. Regardless of the likelihood of the aforementioned scenario, I agree that it's still far better than the wildly unpredictable human factor. And ultimately I think that self-driven cars will be a boon for road safety as well as for fuel economy and overall emissions. (Not to mention traffic; I can't wait for a world where traffic is basically non-existent.)

One thing I realized after making this comment, too, is that road situations are far easier to predict than the randomness of the market, and the consequences are much graver than zeros in a bank account, so I'm sure there will be fail-safes.


The only thing those two have in common is computer code. It's a poor analogy at best.

When it comes to making accurate split-second decisions, I'll take an algorithm over a person any day of the week.

Also, garbage in, garbage out, etc.


"When it comes to making accurate split-second decisions, I'll take an algorithm over a person any day of the week."

You're assuming too much. Just because it's an algorithm doesn't mean it'll know what to do in every situation. It'll take a lot of work to develop a set of algorithms that can handle all the events that confront a driver on a regular basis.

Don't get me wrong, I would be comfortable being driven by a computer but not just yet. Before I say "I'll take an algorithm over a person any day of the week." I want to make sure that algorithm is tested and works well.


By the flash crash, you mean that market event where something went wrong for 20 minutes (due to a human error), but the system recovered all by itself before the end of the day (thanks primarily to electronic systems)?

http://www.chrisstucchio.com/blog/2012/flash_crash_flash_in_...

That event which was exciting if you were an HFT, but which you can't actually find in a daily stock chart?

I'm hoping that self driving cars will be as robust as our electronic trading systems.


That is an awful comparison. A bunch of trading algorithms designed to compete and get an edge on each other. Google's cars aren't programmed to race everything in their path, they're designed to be as conservative as possible.


That wouldn't happen, because Google's cars aren't networked together. You are probably imagining that each Google self-driving car sends its driving data to the cars around it and that is how they avoid accidents. That's not how it works, though (although it would probably be useful in the future). The way it works is that the car has a giant radar attached to the top which builds a real-time 3D map of its surroundings, and that map is uploaded to Google's servers to update the world map in real time.

edit: Also, to add: the reason that flash crashes happen is that the algorithms are unaware of what the others are doing (although they can try to guess). And self-driving cars are designed to follow laws, not beat the competition.
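
For concreteness, here's a rough sketch in Python of what "reacting only to your own sensor picture" could look like. The names, thresholds, and structure are all invented for illustration; this is not anything from Google's actual system.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Obstacle:
        distance_m: float          # range reported by the roof-mounted sensor
        closing_speed_mps: float   # positive means the gap is shrinking

    def plan_speed(current_mps: float, ahead: Optional[Obstacle]) -> float:
        """Choose a target speed using only the car's own local sensor data."""
        if ahead is None:
            return current_mps                     # open road: hold speed
        # Time until contact at the current closing rate (floor avoids divide-by-zero).
        time_to_contact = ahead.distance_m / max(ahead.closing_speed_mps, 0.1)
        if time_to_contact < 2.0:                  # conservative two-second rule
            return max(current_mps - 5.0, 0.0)     # ease off / brake
        return current_mps

The point of the sketch is that nothing in the decision depends on what any other car "intends" to do, only on what the sensors observe right now.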


Well, I think I have it right: while the trading algorithms are networked, they essentially operate independently of one another. So just as a trading algorithm reacts to market movements, the radar on top of the car would react to road conditions and other drivers.

I couldn't even imagine them all networked; although it may actually be better, it seems like even more danger.


That actually isn't a problem for the radar, because it has 360-degree awareness. So if a car veers into your lane and you are wedged between two cars, instead of veering to the right the self-driving car would slow down. So basically your argument is actually an argument FOR self-driving cars. I imagine that if a self-driving car is put in a situation where it cannot avoid an accident at all, it could even maneuver the car to minimize the amount of damage that is caused.
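
To put the "slow down instead of veering" rule in concrete terms, a hypothetical sketch (the function name and lane flags are made up for illustration, not taken from any real system):

    def choose_evasive_action(left_lane_clear: bool, right_lane_clear: bool) -> str:
        """With 360-degree awareness, swerve only into a lane known to be empty;
        when wedged between cars, brake in-lane to scrub speed and reduce impact energy."""
        if right_lane_clear:
            return "swerve_right"
        if left_lane_clear:
            return "swerve_left"
        return "brake_hard"   # wedged between cars: slowing down is the least-damage option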


> ...basically your argument is actually an argument FOR self driving cars.

Hah, yeah it kind of ended up being that way didn't it? :) Even though I don't feel I was arguing against them in the first place.


Assuming the cars are designed to practice basic defensive driving (especially keeping a safe distance), I don't really see a realistic scenario where this could happen.

Far more likely is a human driver screwing things up.


One issue with keeping a safe distance is that doing so means you travel slower than nearly everyone around you on the highway: the space you leave in front of you is viewed as an invitation to merge ahead of you, so you have to slow a bit to open up a safe distance again, which prompts someone else to jump into the "empty" space...

However, I suppose a safe distance for an autocar should be considerably smaller than for a human-driven car, since the start of braking would be essentially instantaneous.
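
Some back-of-the-envelope numbers for that point (the reaction times and the roughly 7 m/s^2 braking figure are assumptions, not measurements):

    # stopping distance = reaction distance (speed * reaction time)
    #                   + braking distance (speed^2 / (2 * deceleration))
    def stopping_distance_m(speed_mps: float, reaction_s: float, decel_mps2: float = 7.0) -> float:
        return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

    highway_speed = 30.0  # m/s, about 67 mph
    print(stopping_distance_m(highway_speed, reaction_s=1.5))   # human reaction: ~109 m
    print(stopping_distance_m(highway_speed, reaction_s=0.05))  # near-instant braking: ~66 m

Under those assumptions most of the human stopping distance is reaction time, which is exactly the part an autocar can shrink.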


As the flash crash showed, if you screw up enough in the market, they'll just hit the reset button and undo the trades.

The same isn't true for cars, and that'll likely affect the code testing process.

Plus, if you're going to reference the flash crash, I'll point out that people, not computers, caused the Great Depression.


That's a good point. For every Flash Crash we've seen, how many human-error market crashes have there been? For every algorithmic auto crash that could ever happen, how many human-error auto crashes have there already been? Just 24 hours ago I had to go pick up my girlfriend from the side of the road after someone who hadn't looked in his mirrors swerved into her car. The complaint that algorithms might cause an unintended car crash, so we should continue to only have people behind the wheel, rings a bit hollow when people-driven cars are one of the biggest killers in Western society.


...people, not computers, caused the Great Depression.

Not to mention the flash crash...

http://www.sec.gov/news/studies/2010/marketevents-report.pdf


Actually, I'd say that the way people currently drive, at least from the mid-Atlantic to the Northeast, is more likely to cause a "flash" crash than a non-emotional computer algorithm. (I'm looking at you, CT drivers, and the 95/287/Turnpike/GSP area.)


So you'll have a crash once a year. Big deal. At least a thousand other crashes prone to human error would be prevented.


I'm going to disagree with everyone explaining how a flash crash isn't a valid analogy here. It actually is, although the word "crash" in this case is semantically overloaded and it does not mean that a true automotive crash is the likely result.

Both the market and the road have a number of autonomous agents interacting with each other under some rule set. The nature of the players has an enormous impact on how the game is played. A set of humans is physically incapable of "flash crashing" a stock market because they are literally physically incapable of trading fast enough for that to happen. The introduction of other effectively-autonomous agents into the market changes the nature of what the collection considered as a whole can and does do.

It is true that introducing computer-controlled cars onto the road in quantity will almost certainly qualitatively change the nature of driving on the road, and it is valid to be curious or even concerned about what this effect may be. It would be particularly bad if all the cars were running the exact same code; it is a completely valid concern that one particular bug could be triggered which could cause mass failure of some type, including true automotive crashes, but possibly also just software crashes. If you dig into real automotive code, you'll find that similar things have already happened.

Long term, I think it is likely to be a net positive effect. You'll have a lot more drivers on the road taking what will probably be a very conservative approach to driving, with much more careful management and maintenance of margin for error, including in situations where humans tend to play fast and loose without even realizing it. It seems likely to me that computer cars will eventually refuse to drive in certain bad conditions, like icy roads, and that over time we will consider that to be an acceptable reason to not drive. But I do think we also want to be careful to ensure that there are many implementations of self-driving cars; monoculture has too significant a chance of a Black Swan event. But it isn't impossible we'll pass through a period where the net result is a bit more dubious.

There are a lot of corner cases to work out, and that includes things that today we wouldn't even consider. Suppose 99% of the cars on the road are computer controlled. What do they do when a teenager hops on an overpass and starts throwing paint balloons? What happens when teenagers start jumping into highways on a dare? As the system becomes a computer system rather than a human system, we must also consider how it will be attacked not only by the real world, but by humans as well. Google's really being too optimistic here. They've made enormous strides, truly enormous strides, and now we're seriously talking about them as a thing that may happen for real, rather than the ever-nebulous "someday", and that's big. But they've got a long way to go before we can truly put them in the hands of the public.


A set of humans is physically incapable of "flash crashing" a stock market

False. They did it in 1962.

http://online.wsj.com/article/SB1000142405274870395760457527...


IMHO, they call that a flash crash because by 1962 standards, it was. But I'd say there were some significant qualitative differences. The Wikipedia article on the 2010 crash [1], for instance, talks about things like "At 2:45:28 pm, trading on the E-Mini was paused for five seconds when the Chicago Mercantile Exchange ('CME') Stop Logic Functionality was triggered in order to prevent a cascade of further price declines." Emphasis mine. Markets have always been able to crash quickly, but without computers you're not going to get phrases like "paused for five seconds" to mean anything.

[1]: http://en.wikipedia.org/wiki/2010_Flash_Crash


Actually it does, and spot on to boot.

He's not saying "don't find a good programmer." He's saying that the questions they ask get them no closer to finding out whether he is a good programmer.


He's saying that the questions they ask get them no closer to finding out whether he is a good programmer.

A lot of the time they're not meant to. They're meant to figure out whether he's a bad programmer (or bad cultural fit, or something else bad).

The cost of hiring someone bad is way higher than most people would guess. I've come to believe that in many cases the questions asked in an interview almost don't matter - they're just there to give you an opportunity to fail, or to say something incredibly dumb or ignorant or obnoxious so that the interviewer can quickly reject you and avoid an expensive mistake.


If you hire someone who is boring and has no obvious problem but isn't very productive, you have your ass covered. His lack of productivity is his fault. If you hire someone exceptionally useful with some notable problem, or even just some notable flavor someone doesn't like, it is something you should have noticed and you are liable. A lot of companies want an unblemished calf more than they want a Zed Shaw.


As long as the interviewer treats it like that I'd be fine with it. If they are playing things the way you describe then answering a question with "why does it matter?" would be a viable response and you'd be open to discussion about it.

The real-world problem is that too many interviewers follow that model because it's what they know from their own past experience. They may not know why they ask such baseless questions; all they know is the answer they want, and damn you if you don't give it to them.


"I'm trying to be less hyperbolic"...

"I'd be honored if you followed me on Twitter."

While I agree with the thesis, and I'd like to start doing so as well, that was kind of funny.


You know, when we built [PDFzen](https://pdfzen.com), we used Bootstrap and made it Metro-esque. I wish we'd had something like this for the homepage. It certainly would've sped things up.

