
There are lots of good comments here, and as the writer, I do appreciate all of them.

One element of this story, which was hard to give the time it deserved, is the difference in how the public perceives disruption among technology companies versus disruption of industrial and service-based industries. The public doesn't seem to care when a company like Intel takes on a company like Fairchild Semiconductor. Part of the issue is a lack of technical sophistication, which makes it difficult to separate competitors based on their products. The more important reason, though, is that technology devouring technology is easily understood as progress: 500 engineers lost their jobs at one firm, but a new firm is hiring 500.

Now take a look at the service disrupters like AirBnB, Uber, etc. First, unlike technical disruption, the public understands these businesses very well. It's a hotel. It's a taxi. It's a laundromat. Second, there is a distinct feeling that these new companies are not playing by the rules, whatever those rules might be (it doesn't help that these companies publicly flout the rules either). Third, and most importantly, the people losing their jobs are far more visible than the gains these companies are making in labor flexibility.

There are plenty of greenfield companies out there (Nest, DeepMind, Climate Corporation are acquisitions in the last month that come to mind). But the region is no longer exclusively producing that kind of progress, so we shouldn't be surprised when people aren't immediately positive about the changes taking place.


I think that you're painting "the public" with a very broad brush. A more accurate model might be that people are happy when new options become available to them, ambivalent when stuff goes on that doesn't impact them, annoyed when people speak negatively about what's important to them, and angry when what is important to them is taken away from them. This is a pretty universal model, but the specifics differ depending on what is important to a person.

My girlfriend and I love AirBnB. It's allowed us to visit some pretty remote locations on very short notice at very reasonable prices. I have a friend who is surviving off the income from AirBnBing out her apartment. Her neighbors probably don't like it. The hotel industry certainly doesn't like it.

I have friends in SF who similarly love having Uber available, because it gets them home late at night after hitting a few bars. As part of the yuppie tech demographic in the Mission, they are also hated by some of their neighbors.

What's changed isn't what's going on in the world, it's who is now angry enough to speak to the media. Somebody who finds a great weekend getaway on AirBnB isn't going to write a story about it or talk to a TechCrunch reporter; they will write a review on the site so that other people can have a similarly great experience. Somebody who loses their job because their hotel can't compete against AirBnB is both plenty angry and now has plenty of time to complain to the media.

I'm reminded here of reading Foucault in college, and specifically the role of the discourse in society. Foucault's central theory was that control of what can be said in the public sphere reflects power dynamics in a society. When public mindspace like the media starts leaning in a certain direction, it doesn't necessarily mean that reality has changed underneath. Rather, it means that certain interests have organized and care deeply enough about a certain issue that they're willing to spend time making sure that public belief swings a certain way on that issue.


The book, "Anatomy of an Epidemic", provides a really comprehensive look at how we ended up in this situation. Some of the keys here:

1) A desire to bring a level of "science" to a part of our physiology we don't understand. The thinking here is that while we do not understand the etiology of depression, we can at minimum begin to use blunt tools to solve problems. The issue, as anyone who has studied complex systems understands, is that the feedback loops are so dense that there is no method to understand what is happening.

2) Financialization of treatment – drugs make more money than therapy and other methods. Or to use an HN phrase, drugs are more easily scaled to the population than other methods. The incentives throughout the entire system push people in this direction, regardless of the underlying research.

3) Treatment doesn't happen instantly in any case. The issue with much of the research today is that we take a very limited time window to evaluate the efficacy of different treatments. If, instead, we looked at treatment over the life course, the results would often be radically different.

This is where startups like Seven Cups of Tea will hopefully play a role. This mental health crisis offers a huge opportunity for disruption and creativity. As a quote in the Stanford alumni magazine said this month: "One hundred years from now, people will look back at the age of giving SSRIs and they will have a reputation that's akin to bloodletting."[1]

[1] http://alumni.stanford.edu/get/page/magazine/article/?articl...


>A desire to bring a level of "science" to a part of our physiology we don't understand.

Confer:

http://thelastpsychiatrist.com/2006/11/massacre_of_the_unico...

http://thelastpsychiatrist.com/2010/01/the_massacre_of_the_u...


"...they will have a reputation that's akin to bloodletting"

Here's to hoping so! And sooner than that.


One other consideration is the number of active accounts in various countries. I imagine the penetration in the United States is one of the highest, and the US population is fairly large as well. It would be interesting to get some sort of relative factor here to get a sense of how aggressively these governments are seeking information.


As a developer on the Python stack, I would love to know when would be a good time to start using Go in serious production work. It seems to me that it solves a lot of the backend services infrastructure problems associated with interpreted languages (one of the reasons I was considering diving into Scala or other JVM languages), is relatively reliable, and has a fairly strong core library. It still seems bleeding edge, but the language seems to have developed far faster than Python did over the last decade or so.


The analogy is a little flimsy, but I'll run with it anyway: I consider Go today to be similar in some ways to Java at around the time of Java 1.1 or 1.2.

Obviously, Go is modern and is in many ways better than today's Java 1.7. But I am trying to illustrate its maturity level and the trajectory that I believe it's on. If you recall the days of Java 1.1, it was already seeing a great deal of early traction. The early traction of Go seems roughly the same to me. Also Java in its 1.1/1.2 years was on a clear trajectory to become a dominant language. I think Go will only grow in popularity for years to come in the same fashion. Even as a primarily Java developer, I look forward to Go being a clear and viable alternative.

I could be wrong about the trajectory, of course.

But I believe a short answer to your question is: if you're considering it, take some time to actually do something with Go. At first something experimental, then something for production use.

As a long-time JVM user, I've been trying to explain to other developers for a while now that assuming you use a modern approach to Java development, the performance of the JVM allows you to be (in my opinion) even more efficient than a dynamic language because you can code your application fairly recklessly. You can defer optimization in all of its forms for a long time, perhaps infinitely. The resulting mindset is a dramatically reduced concern about performance. When I work with most dynamic languages, I can never fully set aside the inner voice saying, "this is going to perform like crap." Trouble is, the voice is often right.

Go brings the same ballpark of performance as the JVM and a style that I believe is more appealing to Python developers than a modern Java stack (I don't think modern Java stacks are given much of a fair shake because of Java's legacy, but that's a separate rant entirely).


And like Java 1.2, it is still missing some critical deal-breaker features for a lot of people.


Exactly. The maturity level is about the same--obviously in different ways--but I think you get what I mean.

Will they ever add generics? Not sure. Will Java ever have proper first-class functions? Not sure.


Why generics? Have you really understood how to write Go? Generics are not needed; you have interfaces.


To be clear: I don't write Go professionally, and I am not one of the "Generics or bust!" advocates. I'm agnostic.

I simply used it as an example of something that many would point to as evidence of Go's maturity level. If the language maintainers don't ever add generics to Go, I think I'd be comfortable with that. And if that's the way it plays out, eventually the design decision will be seen as firm and not a sign of immaturity.


Well, then write for me a type-agnostic map function that does not rely on introspection.


If you understood how to write Go, you would write an imperative ad-hoc loop instead of composing generic functional combinators. But you have to be mature enough to jump over the shadow of your functional pride and write clean imperative code.
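
For instance, a minimal sketch of what that looks like (the slice and the function here are just illustrative), a plain loop specialized to the concrete type:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        words := []string{"go", "java", "python"}

        // The idiomatic pre-generics answer: an ad-hoc loop,
        // specialized to the concrete types involved.
        upper := make([]string, len(words))
        for i, w := range words {
            upper[i] = strings.ToUpper(w)
        }
        fmt.Println(upper) // [GO JAVA PYTHON]
    }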


Of course you can write a 'for' loop. You can also write a goto in C. I was replying to my parent who said 'you don't need generics, since Go has interfaces'. I wanted to point out that interfaces are not a general substitute for generics.
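
To make that concrete, here is a minimal sketch (the Map name and the string example are just illustrative) of the closest interfaces get you: an interface{}-based map that erases the element type, forcing boxing on the way in and runtime assertions on the way out:

    package main

    import (
        "fmt"
        "strings"
    )

    // Map works over any slice, but only by erasing the element type:
    // callers must box values as interface{} and type-assert results
    // back out, with no compile-time checking.
    func Map(xs []interface{}, f func(interface{}) interface{}) []interface{} {
        out := make([]interface{}, len(xs))
        for i, x := range xs {
            out[i] = f(x)
        }
        return out
    }

    func main() {
        words := []interface{}{"go", "java"}
        upper := Map(words, func(x interface{}) interface{} {
            return strings.ToUpper(x.(string)) // runtime assertion; panics if wrong
        })
        fmt.Println(upper) // [GO JAVA]
    }

Note that you can't even pass a []string where []interface{} is expected without copying it element by element, which is exactly the kind of friction generics are meant to remove.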

But you have to be mature enough to jump over the shadow of your functional pride and write clean imperative code.

By that reasoning we can go back to assembly ;): you just have to be mature enough to jump over the shadow of your portability pride and write clean assembly code.

Abstractions exist to help us and in that respect Go feels like a throwback to the past. It's pretty much a Java 1.0 that compiles to machine code.


Loops impose order. Maps do not.

Maps are in principle trivial to parallelise. That would be a nice feature.
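
As a hedged sketch of how that could look (parallelMap is an illustrative name, not anything in the standard library), each element can be handed to its own goroutine precisely because a map promises the elements are independent:

    package main

    import (
        "fmt"
        "sync"
    )

    // parallelMap applies f to each element concurrently. Safe because
    // each goroutine writes only its own slot in out, and completion
    // order does not matter.
    func parallelMap(xs []int, f func(int) int) []int {
        out := make([]int, len(xs))
        var wg sync.WaitGroup
        for i, x := range xs {
            wg.Add(1)
            go func(i, x int) {
                defer wg.Done()
                out[i] = f(x)
            }(i, x)
        }
        wg.Wait()
        return out
    }

    func main() {
        squares := parallelMap([]int{1, 2, 3, 4}, func(x int) int { return x * x })
        fmt.Println(squares) // [1 4 9 16]
    }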


Agree, and a map also provides an "at a glance" assurance that I'm getting a transformed array of the same size. With for loops it takes longer to discern what they're doing, just because they could be doing almost anything, including exiting early.


This might help you:

http://blog.repustate.com/migrating-code-from-python-to-gola...

I love using Go and I'm a Python guy through & through.


> I would love to know when would be a good time to start using Go in serious production work

Now would be a good time. No language, runtime, compiler, library, or framework is ever going to be perfect, but now is a great time to dive in.

> It still seems bleeding edge

This is probably a good thing in many respects because Go doesn't have the baggage from yore, and it was created by some pretty smart and capable people.

> but the language seems to have developed far faster than Python did over the last decade or so

Language designers are getting better at marketing. No language succeeds without fantastic marketing.


This is probably a good thing in many respects because Go doesn't have the baggage from yore, and it was created by some pretty smart and capable people.

As was Javascript plus Node.js two years ago, Ruby and RoR five years ago, etc.

You'd think that more reasons are required than "it is new, doesn't have baggage and was created by smart people".


Go isn't a framework. NodeJS is a framework, as is Rails. JavaScript and Ruby are very old languages, both laden with baggage.


Node.js is not a framework; it's a platform written in C++ and JS.


Yeah, you're right. Thanks for pointing that out.


Exactly, there is never a better moment to start learning something new than now.


I'm using Go in production, and migrating existing Python services to it. I've found nothing wrong with using it for "serious production work". The ecosystem obviously isn't as mature, but it's getting there. There don't seem to be any gaping holes.


Only one way to know that: give it a try!


Good point!


When considering this type of transparency, there is a spectrum of types of people. Some prefer radical transparency, where every possible pitfall and challenge is openly acknowledged and noted. At the other end of the spectrum, a lot of people don't want any knowledge of the challenges and, in fact, work most productively in a state of naiveté.

Part of hiring is understanding how different people respond to this sort of stress. For me, I really prefer this sort of off-the-record honesty. I know a start-up is challenging, and a failure to acknowledge a reality that I know exists is a huge turnoff. But I can definitely imagine that some hires would find this scary and would prefer a startup that seems much more "perfect." You attract who you want to attract.


Got to hang out with the folks at Rough Draft last night – I am glad to see that their passion for helping entrepreneurs is finally coming to fruition. Keeping in mind that these are students making these investments (many of whom were simultaneously studying for midterms this week), it is truly inspiring to see such energy in the Boston ecosystem. Keep up the good work!


Thrun seems to be getting into this mode as well with this line: "He’s thinking big now. He imagines that in 10 years, job applicants will tout their Udacity degrees. In 50 years, he says, there will be only 10 institutions in the world delivering higher education and Udacity has a shot at being one of them. Thrun just has to plot the right course."

Why is it that people (even apparently faculty like Thrun) seem to forget that universities deliver more than just undergraduate education? "Delivering higher education" also means research labs and centers for scholars, graduate education, professional education, executive education, etc. While undergraduate education may be ripe for disruption, there is a serious leap needed to go from there to the complete disappearance of thousands of institutions.

Just look at the numbers: federal research grants and endowments will sustain at least several hundred universities over the long term, and other universities that are currently teaching-focused will move more of their efforts to research as students take online classes and stop being paying customers.

The institutions that should be worried are technical colleges (depending on major), community colleges and perhaps state college systems. They are the most likely to be disrupted, particularly if Udacity could offer a more comprehensive curriculum. But Stanford? Or even schools like the University of Florida or UC-Santa Barbara? They have plenty of other income sources, and still have a lot of life in them.


In your worldview, places of higher education and places of advanced research are strongly coupled. This does not need to be the case. Historically, some great research labs had nothing to do with education (Bell Labs). I think Thrun's prediction is that there will be great institutions of higher education that have nothing to do with research. I certainly agree with that point.

Of course, all disruptive innovation starts at the bottom of the market. I am sure the first big wins will be giving educational access to talented and bright but poor kids in the developing world. For them, even something like access to a first world university is far from affordable, and the fledgling online classes are much better than the alternative, even if they aren't yet competitive with traditional university education.

I took the online ml-class last semester, and I am now taking the pgm-class this semester. I have taken or am taking graduate versions of the same classes at a traditional university.

The big discovery (thanks to Sal Khan) is that teaching fundamentally scales. It's a big waste that, all over the world, hundreds of smart researchers are each teaching tens to hundreds of students basically the same subject. What would be best is to get one great teacher for each of those subjects and have him focus really hard on making a ridiculously good online class: 10x the effort any one teacher would have put in, but a tenth of the combined effort previously spent by the hundreds of researchers. Then you have freed up a bunch of time for the researchers, and most of the thousands to tens of thousands of students actually experience a better class.


I think you exemplify what is, in my view, the more likely scenario and the typical real, down-and-dirty use case for this stuff: people who are already undergraduates or graduate students using it as supplementary material, either for their current classes or, more probably, as something extracurricular.

