Hacker News | alex_dev's comments

That's why the poster above said these are different use cases and why comparing them is ridiculous. Calling ObjectEach() for every field just to reach parity with json.Unmarshal is unconscionable.
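
Rough sketch of the "parity" usage I mean, assuming the ObjectEach in question is the one from github.com/buger/jsonparser (the struct and field names are made up for illustration):

    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/buger/jsonparser" // assumed: the package providing ObjectEach
    )

    type user struct {
        Name string `json:"name"`
        Age  int64  `json:"age"`
    }

    func main() {
        data := []byte(`{"name": "alex", "age": 42}`)

        // Typical json.Unmarshal usage: decode the whole object into a struct.
        var u user
        if err := json.Unmarshal(data, &u); err != nil {
            panic(err)
        }

        // Walking every field with ObjectEach just to rebuild the same struct:
        // this is the kind of comparison that doesn't make sense.
        err := jsonparser.ObjectEach(data, func(key, value []byte, dt jsonparser.ValueType, offset int) error {
            switch string(key) {
            case "name":
                u.Name = string(value)
            case "age":
                n, err := jsonparser.ParseInt(value)
                if err != nil {
                    return err
                }
                u.Age = n
            }
            return nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", u)
    }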


> "Not everyone feels this way. Some critics recoil, in part because the effluent is released into local sewage systems."

Don't people know what happens to the body's blood during embalming? Goes down the drain.

I've already told my family that this is my preferred burial method.


Last I heard, Kaggle runs atop Azure and is heavily a C# shop. It'll be interesting to see the transition to Google Cloud if that's the case.


I can confirm that Kaggle runs on Azure because I block all Microsoft IPs (to avoid the ninja Windows 10 upgrade) and must disable the blocker in order to go on the site.


As skrebbel said, don't they charge for the upgrade now? That said, Never10[1] was (still is?) a great tool to prevent the Windows 10 auto-upgrade. Also, according to the Never10 page, Microsoft now has an optional update to get rid of the GWX stuff.[2]

[1] https://www.grc.com/never10.htm

[2] https://support.microsoft.com/en-us/kb/3184143


> to avoid the ninja Windows 10 upgrade

What ninja upgrade? You always had to opt in. Yes, they were really pushing the offer annoyingly hard, but I had no problems whatsoever keeping one of my machines on Windows 7.

Anyway, you can stop doing so now; the free-upgrade period is over.


This is incorrect. There was an opt-out phase where the Windows 10 install started automatically in the middle of work. I've experienced this myself: there was a moment where Windows 7 just shut down and started installing Windows 10, and I had to wait 30 minutes until I could press "I disagree" on the EULA, at which point it started rolling back the Windows 10 it had just installed.


Entirely off topic, but I thought they now charge for the Windows 10 upgrade and don't force it anymore?


They do charge now, but you can get it free if you say you will use an accessibility feature.


Why not upgrade to Windows 10? It's my favorite Windows OS yet, and has me even rethinking whether I want our house to be all OS X...


This thread from a while back covers some of the objections people have to Windows 10, beyond the usual privacy concerns:

https://news.ycombinator.com/item?id=13555100


So you don't get Windows updates?


At this point presumably a system not running Windows 10 is not getting updates anymore. Unless it's an enterprise install, in which case the ninja update is irrelevant.


I get updates on Win 7


This is a great idea. Adding it to my DNS blackhole as we speak.


It's really not a great idea. Either you don't run Windows, and it's not an issue, or you've just blocked Windows Update and the other important services Microsoft provides that work in tandem to keep your systems safe.


Blocking Windows Update sounds like a feature to me, not an issue.


> Either you don't run Windows, and it's not an issue,

Not a solution for those of us who run Windows boxes for various reasons...

And to clarify, I plan on occasionally letting updates through (I'm already on Windows 10) but this is a great way to prevent data collection / backdoor activation, which I hadn't considered. Seems like the simplest way to add a lot of privacy to Windows.


There's a shedload you can block without interfering with updates.


Yet that's not what the parent and its parent were talking about or implying. They clearly said "blocking all Microsoft IPs".

And considering the Windows 10 upgrade was being pushed through Windows Update, I'm not sure how you'd prevent that specific update by blocking an IP without interfering with Windows Update as a whole.


Makes sense. Azure LBs do not support ICMP and drop all ping packets, so you can't ping any Azure-hosted service. Kaggle.com fits that description.


I'm pretty sure it supports ICMP, since TCP/IP can't work properly without it; I guess you mean ICMP echo. Also, there are something like four kinds of Azure load balancer, and this is only true for some of them.


I can ping bing.com, but does that mean bing is not hosted on Azure? [Though it redirects to pinging something like a-0001.a-msedge.net]


It may just not use the Azure LB service (e.g. running HAProxy on virtual machines instead).


They are also known to have used F#, and even provided a testimonial to this effect: http://fsharp.org/testimonials/. Can't say if it's still used, though. That's two recent high-profile acquisitions (with Jet.com) for F# shops.

> At Kaggle we initially chose F# for our core data analysis algorithms because of its expressiveness. We’ve been so happy with the choice that we’ve found ourselves moving more and more of our application out of C# and into F#. The F# code is consistently shorter, easier to read, easier to refactor, and, because of the strong typing, contains far fewer bugs.

> As our data analysis tools have developed, we’ve seen domain-specific constructs emerge very naturally; as our codebase gets larger, we become more productive.

> The fact that F# targets the CLR was also critical - even though we have a large existing code base in C#, getting started with F# was an easy decision because we knew we could use new modules right away.


Google Cloud supports Windows, right? What would be the problem? (Honest question)


None whatsoever, unless they're heavily bought into Azure-specific services.

The idea that if you do C# you must be on Azure (or the other way around) has been outdated since Azure started. The first startup I ran tech at hosted C# on Mono in Docker containers on DigitalOcean and had devs on all 3 major OSes.


I'd be surprised if there isn't a decent amount of C# somewhere in the Google ecosystem.


I'd be interested if anyone knows anything about this. Especially given the recent updates for running .NET Core on Linux/Mac, a company like Google could make great use of C# without needing to shell out for Windows licenses.


Relevant 10 year old blog post [1].

[1]: http://steve-yegge.blogspot.com/2007/06/rhino-on-rails.html?...

I don't know how true this still is, but there was a time, at least, when it sounds like anything outside of C++, JVM languages, and Python was off limits.


The reverse DNS of kaggle.com's IP points to cloudapp.net, which is a Microsoft Azure domain, so I think this makes sense.


That's really interesting to hear. I wouldn't read too much into it; I was mostly just speculating. It's quite likely that they mostly scooped them up for the Rolodex that is their user database.

In any case, congrats to the Kaggle team!


I think this may have something to do with Jeremy Howard's time as president there - I remember watching a few of his tutorials a couple of years ago when he was still at Kaggle and he was really into C#.


I wonder if Nest has support contracts for any Java 6/7 they are still using.


I've been paying rent with www.rentpayment.com, which unfortunately serves up its home page with multiple logins over HTTP. Naturally, emails and tweets to their support go ignored. Maybe they'll finally respond once more people ask them why they're "non-secure".


I wrote a short article on this topic with approaches for less tech-savvy folks to set up HTTPS:

https://medium.com/punk-rock-dev/https-new-year-avoid-the-no...


I've been using this Chrome app for many months and enjoy it immensely. I do not need the bells and whistles of Plex. Furthermore, every time I attempted to stream a movie from Plex, Chromecast had buffering issues.


Looks like it uses regexp... And there isn't any benchmark code, which one would expect when making the claim that it's "fast".


Regexp definitely isn't something you want to be using if your primary goal is speed. In tight string-parsing loops I've found that splitting the string and then cycling through the range of indices in a slice was several times faster than regexp matching. Obviously the performance difference will vary with the expression and the application, but it was enough to convince me to think twice about future use of regexes: does the problem actually need them, or am I just using them lazily? The latter is a habit I'd slipped into after years of Perl hacking.
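
A rough illustration of the kind of comparison I mean (not a benchmark of any particular project); the record format and function names are made up, and only the standard library is used. In a real comparison you'd wrap both in testing.B benchmarks rather than trusting intuition:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // Illustrative input: a colon-delimited record (made up for this sketch).
    const record = "alice:x:1000:1000:Alice:/home/alice:/bin/bash"

    // Precompiled once; captures the first three colon-separated fields.
    var fieldRe = regexp.MustCompile(`^([^:]*):([^:]*):([^:]*)`)

    // firstThreeRegexp extracts the fields with a regular expression.
    func firstThreeRegexp(s string) []string {
        m := fieldRe.FindStringSubmatch(s)
        if m == nil {
            return nil
        }
        return m[1:]
    }

    // firstThreeSplit does the same by splitting and indexing into the slice.
    func firstThreeSplit(s string) []string {
        parts := strings.Split(s, ":")
        if len(parts) < 3 {
            return nil
        }
        return parts[:3]
    }

    func main() {
        fmt.Println(firstThreeRegexp(record)) // [alice x 1000]
        fmt.Println(firstThreeSplit(record))  // [alice x 1000]
    }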


That depends entirely on the regex implementation. If the implementation uses a DFA to match multiple regexes simultaneously, then the performance will be as good as a trie's, because a DFA is more or less a trie.


> That depends entirely on the regex implementation

True, and anyone who knows that Russ Cox is a core member of the Go team will have a hard time suppressing a smirk when reading this :)

https://swtch.com/~rsc/regexp/


True. I was talking specifically about the same regexp package as the one used in the project in question, though.

I assumed that would have been obvious given the context; however, I apologise for not stating it in my comment and shall amend it appropriately. [edit: I can't add an amendment to my previous post now]


True. But nowadays most regex implementations are quite good (apparently Go's is not - I haven't used it).

That said, most regex performance problems are PEBKAC. Writing a fast regex is hard and requires a pretty thorough understanding of parser theory. And many who use regexes don't understand that it's critical to precompile them for performance: you don't get a fast parser when you rebuild the DFA each time you use it (see the sketch below).

*edit: a word
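
A minimal sketch of the precompilation point, using only Go's standard regexp package; the function names are made up for illustration:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Compiled once, at package init, and reused for every match.
    var wordRe = regexp.MustCompile(`\w+`)

    // matchPrecompiled reuses the already-built matcher.
    func matchPrecompiled(s string) bool {
        return wordRe.MatchString(s)
    }

    // matchRecompiled rebuilds the pattern on every call; in a hot loop the
    // compilation cost dominates, which is the mistake described above.
    func matchRecompiled(s string) bool {
        return regexp.MustCompile(`\w+`).MatchString(s)
    }

    func main() {
        fmt.Println(matchPrecompiled("hello"), matchRecompiled("hello"))
    }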


Go regexps are slow (https://goo.gl/r0K2xw), but the problem is not regexps, it's Go's implementation of regexps. So let's not blame regexps when regexps aren't the problem. By that logic, people shouldn't use the sort package either...


You're making a distinction where one doesn't need to be made. It doesn't matter whether regexp is generally slow or whether it's Go's implementation specifically - if you're using Go and performance is your primary goal, you're generally best off avoiding regexp.


> You're making a distinction where one doesn't need to be made. It doesn't matter whether regexp is generally slow or whether it's Go's implementation specifically - if you're using Go and performance is your primary goal, you're generally best off avoiding regexp.

Avoiding regexp doesn't fix Go's implementation of regexp; making it faster does. Your argument is preposterous. If the Go team really cared about performance, it would fix its regexp implementation.


I think you're missing the point of the discussion entirely. When you're more or less doing string splitting, using regex (regardless of performance) really is the wrong tool for the job. For this use case (URL routing), a tree-based data structure - a trie or radix tree - is better suited.
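
To make the trie idea concrete, here's a toy sketch of segment-based route matching; the node layout and ":param" syntax are invented for illustration and aren't taken from any particular router:

    package main

    import (
        "fmt"
        "strings"
    )

    // node is one path segment in a toy routing trie.
    type node struct {
        children map[string]*node // literal segments
        param    *node            // wildcard segment, e.g. ":id"
        paramKey string
        handler  func(params map[string]string)
    }

    func newNode() *node { return &node{children: map[string]*node{}} }

    // add registers a pattern like "/users/:id/posts".
    func (n *node) add(pattern string, h func(map[string]string)) {
        cur := n
        for _, seg := range strings.Split(strings.Trim(pattern, "/"), "/") {
            if strings.HasPrefix(seg, ":") {
                if cur.param == nil {
                    cur.param = newNode()
                    cur.paramKey = seg[1:]
                }
                cur = cur.param
                continue
            }
            next, ok := cur.children[seg]
            if !ok {
                next = newNode()
                cur.children[seg] = next
            }
            cur = next
        }
        cur.handler = h
    }

    // match walks the trie segment by segment, collecting parameters.
    func (n *node) match(path string) (func(map[string]string), map[string]string) {
        cur, params := n, map[string]string{}
        for _, seg := range strings.Split(strings.Trim(path, "/"), "/") {
            if next, ok := cur.children[seg]; ok {
                cur = next
            } else if cur.param != nil {
                params[cur.paramKey] = seg
                cur = cur.param
            } else {
                return nil, nil
            }
        }
        return cur.handler, params
    }

    func main() {
        root := newNode()
        root.add("/users/:id/posts", func(p map[string]string) { fmt.Println("posts for", p["id"]) })
        if h, p := root.match("/users/42/posts"); h != nil {
            h(p)
        }
    }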


> I think you're missing the point of the discussion entirely. When you're more or less doing string splitting, using regex (regardless of performance) really is the wrong tool for the job. For this use case (URL routing), a tree-based data structure - a trie or radix tree - is better suited.

I'm not missing the point of the discussion. Using regex is not the wrong tool for the job; you deemed it the wrong tool for the job. And deeming it the wrong tool for the job doesn't fix Go regex being slower than in other languages. The two issues are not separate. People like you talk about performance as a goal while dismissing obvious performance issues in the standard library as "wrong tool for the job".

You're not going to convince anybody with this kind of argument, aside from gophers who already think like you do. I'm not one of them.


FWIW, if the router were in C, Rust, or .NET (which has a JIT for its regex engine), I would still tell you regex is the wrong tool for splitting a URL on '/'. How many people have to tell you facts before you believe them? Forget your pointless anti-Go bias. A regex, while perfectly good for certain types of pattern matching, makes no sense here.

I have often taught the same lesson to junior colleagues who use regex in Python or C++ code where splitting the string would be simpler, more maintainable, and faster.


> How many people have to tell you facts before you believe them? Forget your pointless anti-Go bias

Because a few people on HN constitute a consensus? Give me a break. You have your opinion, I've got mine; whatever you think you are, you're in no way an authority on the matter. There are many ways to implement an HTTP router, and there are also many ways to implement regular expressions. A bad implementation isn't saved by deeming the use of regexp the "wrong tool for the job".

> A regex, while perfectly good for certain types of pattern matching makes no sense here.

What makes no sense is your petty comment.

> Forget your pointless anti-Go bias

Pointing out facts is "anti-Go bias"? OK, how about you stop drinking the "pro-Go Kool-Aid"?


I think you're a bit delusional. I don't even write Go (so there's literally no Kool-Aid for me to drink), but you're free to keep disagreeing, and to keep being wrong :)


A person looking to use an HTTP router most likely isn't going to rewrite or fix the regexp package just so they can use this router when there are already other routers that are faster as-is. Do you dispute that?


If you use a broken hammer to attempt to insert a screw, you're crazy for using the hammer, not because it's broken.


> If you use a broken hammer to attempt to insert a screw, you're crazy for using the hammer, not because it's broken.

Are you resorting to insults now? Or is this the typical hate and mean spirit of the Go community? A router isn't a hammer. And I couldn't care less about your opinion.


I sincerely did not intend to insult you. I'm also not associated with the go community.


The linked comment is about CSV parsing being slow (the GitHub issue linked in that comment shows that `regexp` doesn't show up in the benchmarks at all), so I don't understand why you linked a six-year-old thread that has been revived by unrelated topics.


Regexes are the problem, because they're simply the wrong tool for the job.


> Regexes are the problem, because they're simply the wrong tool for the job.

For what job? Extracting route variables from paths? They are the right tool for that job; only in the Go community are they deemed the "wrong tool for the job". Your statement embodies everything that is wrong with the Go community. Instead of finding a solution to a problem, you guys spend your time shifting the blame onto "bad practices".


Wouldn't it make more sense to use something faster and simpler for most routing, and then an optional argument for regular expressions? Lots of web frameworks use that approach.
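
A rough sketch of that hybrid approach: literal and parameter segments are matched by plain string comparison, with an optional, precompiled regexp constraint per parameter. The "{name:pattern}" syntax here is made up for illustration, not any particular framework's:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // segment is one piece of a route pattern: literal segments compare
    // directly; parameter segments may carry a precompiled regexp constraint.
    type segment struct {
        literal string
        param   string
        re      *regexp.Regexp // nil means "match anything"
    }

    // parsePattern turns "/users/{id:[0-9]+}/posts" into segments.
    func parsePattern(pattern string) []segment {
        var segs []segment
        for _, s := range strings.Split(strings.Trim(pattern, "/"), "/") {
            if strings.HasPrefix(s, "{") && strings.HasSuffix(s, "}") {
                name, expr, hasRe := strings.Cut(s[1:len(s)-1], ":")
                seg := segment{param: name}
                if hasRe {
                    seg.re = regexp.MustCompile("^" + expr + "$")
                }
                segs = append(segs, seg)
                continue
            }
            segs = append(segs, segment{literal: s})
        }
        return segs
    }

    // match compares path segments against the pattern, collecting parameters.
    func match(segs []segment, path string) (map[string]string, bool) {
        parts := strings.Split(strings.Trim(path, "/"), "/")
        if len(parts) != len(segs) {
            return nil, false
        }
        params := map[string]string{}
        for i, seg := range segs {
            switch {
            case seg.param == "":
                if parts[i] != seg.literal {
                    return nil, false
                }
            case seg.re != nil && !seg.re.MatchString(parts[i]):
                return nil, false
            default:
                params[seg.param] = parts[i]
            }
        }
        return params, true
    }

    func main() {
        route := parsePattern("/users/{id:[0-9]+}/posts")
        fmt.Println(match(route, "/users/42/posts"))    // map[id:42] true
        fmt.Println(match(route, "/users/alice/posts")) // map[] false
    }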

