
It's so the scanner operators can tell there's nothing hidden underneath your laptop. I'm not sure whether the laptop case (or just the battery?) can actually block the scanners, or whether it just makes the output image harder to interpret. Either way, I guess they want to be sure...


I think that was true back when he wrote the words, but less so now. Even if you're not noticing a delay, code that completes quicker is generally code that uses less power, and you can still notice the effect on your phone's battery life - or on the size of your monthly bill from your data center.


Great point. Efficiency comes in different shapes. To respond, I'd say power & cost are human-oriented metrics, and I take his point to be "use a human-oriented metric" - so for me it's as true now as it ever was.


Ah darn, I wrote along the same lines an hour ago or so but got delayed in hitting the submit button until now. Yes, pretty much this.


All of you complaining about this proposal not being based on Vulkan seem to be overlooking the fact that Vulkan is actually quite cumbersome to use. Metal, on the other hand, is a really well designed API and in my opinion strikes just the right balance between performance and usability. If it was available for non-Mac platforms too, it would be my first choice of graphics API every time. So for me, a cross platform web graphics API based on Metal is really quite an exciting prospect - much more so than one based on Vulkan - and I applaud Apple for proposing it.


Yeah, have the people arguing for Vulkan on the web actually experienced what it's like to use Vulkan? Here's a small taste: https://renderdoc.org/vulkan-in-30-minutes.html


There are many third-party libs which make using Vulkan much easier, all the way up to game engines which abstract everything. That isn't an argument against Vulkan.

A good graphics API, even a web one, is one that provides good control over the GPU, sets sane standards which limit future implementation fragmentation (like shader byte-code instead of a specific shader language), has good debugging tools, and so on. That's what Vulkan is, regardless of how verbose it is in comparison to OpenGL.

I'm not saying you can just make Vulkan run on the web, but I'm certainly in favor of a Vulkan-subset (using SPIR-V as a shader base) becoming the successor to WebGL over one inspired by Metal and MSL.


I've recently had to deal with some code that did this for work (and also used operator* for dot product) and it made the equations incredibly difficult to read. Please don't make the mistake of doing this in your own code.


Using operator* for the dot product is straight-up wrong, since component-wise multiplication between two vectors is well defined and used all the time.


Exactly.

Same with %: since we're talking about a number-like object, there's a clear and expected meaning for it, and that is NOT the cross product.
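
To make it concrete, here's roughly the convention I'd expect. This is just a sketch with a made-up Vec3 struct, not any particular library's API:

    struct Vec3 { double x, y, z; };

    // Component-wise multiply: the one meaning of * that reads naturally
    // for a number-like vector type.
    inline Vec3 operator*(const Vec3& a, const Vec3& b) {
        return { a.x * b.x, a.y * b.y, a.z * b.z };
    }

    // Dot and cross get named functions, so the equations in calling code
    // read the way they do on paper.
    inline double dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    inline Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }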


The term you're looking for here is a depth matte. It's a technique that's been in use for many years now, but of course it's only as reliable as your depth data so it's just one of the many tools in a compositor's tool belt.


I like that you guys are being open about this, but the way you calculate compensation leaves a pretty bad taste in my mouth. I very much dislike the idea that you, as an employer, are deciding what proportion of their income your employees should be spending on rent. You're also effectively saying that work done by someone who lives in a cheaper location is less valuable to the company than the same work done by someone who lives in a more expensive location. Maybe all employers do this and the only difference is that you guys are being honest about it, but still... yuck.


As someone who works remotely in a company that does take location into account, I can understand their perspective. I think there's a base value for work, and then, depending on the purchasing power of each dollar earned, there are multipliers that eventually give you your final salary. So it's "value of work" * location-based differences in spending power. The value of the work stays constant.

That said, I do think the factors used to calculate the salary can improve. In general, I love the idea of thinking in terms of quality of life. Can a person in Sri Lanka enjoy a similar quality of life to a person in SF, at least in terms of the factors that can be controlled? (A company can't control the quality of public transport or the municipality in a given location, for example, but it can give me the means to buy experiences or work around those issues.)

And that matters because, although rent in Sri Lanka is lower than in Brisbane, buying groceries is actually more expensive. Buying electronics is certainly more expensive because of the enormous markups and taxes. Compared to a location in the US, I sometimes pay nearly triple the price of a given electronics item just for the cost of shipping it and then paying customs. Even travel becomes more expensive, since I have much lengthier visa processes to go through. These numbers eventually add up, and while I can save huge amounts of money by living frugally, if I wanted to live a good life supporting my wife and child, the number should ideally be 60k USD and above rather than 38k.

I should mention that software engineers are considered some of the lowest-level fodder in Sri Lanka, and our good salaries can be something like 12-15k per year. Starting salaries would be something ridiculous like 3k USD per year (that was mine). But that's also why so many people are migrating to Australia, the US, and Canada asap if they can.


Doesn't it bother you that you can provide the same value to your employer as someone in Brisbane, but only get paid a fraction as much for it merely because you're in Sri Lanka?

I'm a remote worker too (and I'm very happy with the way my current employer handles it). As long as I'm available at the times and places my employer needs me to be, why should they have any say in where I live or how I spend my money? I want to be able to manage my quality of life myself, not have it decided for me by someone else!


Well, re the value, I don't feel salary has ever been a great representation of value a person brings to the company. It's decent at a basic level but quickly breaks down as your value grows.

But to be honest, I feel bothered about something completely different. My worry is for the person in Brisbane. I worry that people like myself will be seen as advantageous to hire, and if it came down to a close hiring decision between me and someone in Brisbane, I wouldn't want to be chosen just because I require a lower salary. GitLab does do this. I don't fault them for that, though - at the end of the day you want to save money. It's a tough conversation, really. Quality of life and spending power are real things, and value can actually be seen as relative when you look at how much it takes to give people equal opportunities from location to location. But this opens room for abuse and exploitation. I think remote-working salaries vs location will be discussed more and more over time, because there are definitely many shades of grey on the way to the "right" path.


> I very much dislike the idea that you, as an employer, are deciding what proportion of their income your employees should be spending on rent.

With rent as a multiplier, it's like they're suggesting 100% of your income goes to rent. It seems like a more reasonable way to take housing into account would be something like:

Salary = Base + Avg Rent

Using that formula, salary might be about $20k more for somebody in NYC than for somebody in Tucson. Using the actual calculator, it's $72k more ($117k vs $45k for senior level and average experience).
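
To put some rough numbers on the difference between the two models - all figures below are made up for illustration and aren't GitLab's actual base or rent data:

    #include <cstdio>

    int main() {
        // Hypothetical figures, purely for illustration.
        double base = 90000.0;                // notional base salary
        double rentIndexNYC = 1.3;            // multiplier-style rent index
        double rentIndexTucson = 0.5;
        double avgAnnualRentNYC = 32000.0;    // additive-style average rent
        double avgAnnualRentTucson = 12000.0;

        // Multiplier model: the entire salary scales with local rent.
        std::printf("multiplier: NYC %.0f vs Tucson %.0f\n",
                    base * rentIndexNYC, base * rentIndexTucson);

        // Additive model: only the housing component varies by location.
        std::printf("additive:   NYC %.0f vs Tucson %.0f\n",
                    base + avgAnnualRentNYC, base + avgAnnualRentTucson);
        return 0;
    }

With those made-up inputs, the multiplier model produces a gap on the order of $70k while the additive model produces a gap on the order of $20k, which is the shape of the difference I'm pointing at.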


We found that rent correlates with market rates, see https://about.gitlab.com/handbook/people-operations/global-c... "Perhaps surprisingly, there was a stronger correlation between compensation and rent index than with the more general cost of living index available through Numbeo (or the cost of living with rent index, for that matter); and so we moved ahead with the Rent Index."


It might do you well to check up on the pay scales for the mid-size American cities.

I live in Minneapolis and the rates offered are laughable, really. My last apartment was on the border of St Paul; if I had lived a block away, your offer would have been about 10% less.

It seems to not take into account rent diversity within a city (and which level a skilled employee would pick given the opportunity).

GitLab has really gotten my interest over the past year, both from trying it myself recently and from seeing you interact with folks on HN. I'm currently searching for a new position, but seeing the rates makes applying a non-starter.


> It's hard to refute directly because there isn't any motivation or reasoning presented, only prescriptions (the author cites their "experience" in comments), so the best I can do is point out that most of it is really bad advice.

If you want to refute it, why not say what you think is wrong with each of the prescriptions instead of just saying "most of it is bad"?

Personally I agree with most of the prescriptions but there's one I disagree with completely and another that I'd add a caveat to.

The one I disagree with is using C headers instead of their C++ wrappers. Some of the C++ wrappers do actually add value (e.g. cmath adding overloaded versions of abs, etc.), and it's easier for me to remember a single consistent rule ("use the C++ wrappers, always") rather than a list of which ones to use the C++ wrapper for and which to use the C header for.
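
A quick sketch of what I mean - nothing project-specific here, just the standard headers:

    #include <cmath>   // the C++ wrapper adds overloads in namespace std

    int main() {
        double d = -2.5;
        float  f = -1.5f;
        // std::abs is overloaded for floating-point types, so the same
        // spelling works for every numeric type:
        double rd = std::abs(d);   // 2.5
        float  rf = std::abs(f);   // 1.5f
        // With only the C headers you have to remember fabs()/fabsf(),
        // and a stray abs(d) can silently truncate through the int version.
        (void)rd; (void)rf;
        return 0;
    }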

The one I'd add a caveat to is the one about not using anything from the STL that allocates: I think that's good advice under some circumstances, but not all. The STL containers are really useful for getting something up and running quickly, so I think it's fine to use them in that case and only switch to custom containers once allocation shows up as a hot spot in your profiling.

As a caveat to the caveat, I would add that STL classes should only ever be used as an internal implementation detail, never exposed as part of an API. This is because the implementation of the STL classes can change, causing binary incompatibility. For example, Microsoft changed their implementation of std::string between Visual Studio 2008 and 2010 (if I remember correctly; and possibly again since?); if you have a library compiled against the older std::string, you can't use it in a project being compiled against the newer std::string, and vice versa - unless you have the source for the library and can recompile it. Using your own classes protects you from that, because it puts you in control of when the ABI changes.
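
Something along these lines is what I have in mind - the names are made up, but the point is that only plain C types cross the library boundary while std::string stays internal:

    #include <cstring>
    #include <string>

    struct Widget { int id; };   // hypothetical library type

    // Public API: no STL types in the signature, so the library's ABI
    // doesn't break when the caller's std::string implementation changes.
    int widget_name(const Widget* w, char* buf, int buf_size);

    // Implementation: std::string is fine as an internal detail.
    int widget_name(const Widget* w, char* buf, int buf_size) {
        std::string name = "widget-" + std::to_string(w->id);
        std::strncpy(buf, name.c_str(), buf_size - 1);
        buf[buf_size - 1] = '\0';   // assumes buf_size >= 1
        return static_cast<int>(name.size());
    }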


> If you want to refute it, why not say what you think is wrong with each of the prescriptions instead of just saying "most of it is bad"?

That is what's wrong with them: if you follow them you'll end up writing worse code for no good reason. I would be a lot more interested in discussing the reasons the author might have had, but they don't present them, so there's little to discuss there.


I think the grandparent post is talking about decoding to RGB with a full 32-bit float per channel, which is 12 bytes per pixel (three 4-byte channels) rather than 8. The high precision is needed for HDR and for the extra processing you have to do to the pixels after they're decoded - motion compensation, gamma correction, etc.


My opinion, as someone whose code got broken by a library micro-version update just yesterday, is that the responsible thing to do is to add a new function with the new args instead of changing the old one. Change the implementation of the old one so it calls through to the new one with suitable parameter values, and mark the old version as deprecated, but please don't remove it until the next major release!
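
In C++ terms (the names here are hypothetical, not from any particular library), the pattern I mean looks something like this:

    #include <cstdio>

    // New entry point with the extra parameter.
    int parse(const char* input, int flags) {
        std::printf("parsing %s with flags %d\n", input, flags);
        return 0;
    }

    // Old signature kept so existing callers still compile: it forwards to
    // the new overload with a sensible default and warns at compile time.
    [[deprecated("call parse(input, flags) instead")]]
    inline int parse(const char* input) {
        return parse(input, /*flags=*/0);
    }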


That's what I do in most cases. But sometimes there's just no option.

The particular case that I was referring to when I said I decided to put off changes because I didn't want to bump the version number was actually a stylistic change, renaming a method from `parse(with:)` to `parse(using:)`, in order to better match the Swift 3 naming conventions. Normally I would have just marked the old method as deprecated, except a compiler bug means that if I do so, any code using trailing-closure syntax fails to compile (https://bugs.swift.org/browse/SR-3227). So it's literally impossible for me to rename this method in this manner without deleting the old method entirely (which is a breaking change). But I just bumped the major version number recently when Swift 3 was released, and I didn't want to bump it again shortly afterwards just because I didn't consider this method name during the Swift 2->3 migration.


Fair enough, I understand the desire not to bump the major version in this case. It does still kind of suck for users of the library though.

The case I was talking about was the OpenVR library, which changed the names for some of its enum values in the upgrade from 1.0.4 to 1.0.5. The change was documented in the release notes and it was straightforward to fix our code, but now we have to document that we require at least v1.0.5 and everyone building our code has to update their copy of OpenVR and so on. There are knock-on effects, is what I'm trying to say.


I love my Aorus X3: http://www.aorus.com/Product/Features/X3%20Plus%20v6

Portability, power and build quality are all excellent. Battery life isn't up to MacBook standards, but you get a good 4 to 5 hours on a single charge, which is still fairly usable. The trackpad isn't as nice as a MacBook's either, but it's good enough. Everything else about it has the MacBook Pro completely outclassed, in my opinion.

