I don't know that computers can model arbitrary length sine waves either. At least not in the sense of me being able to input any `x` and get `sin(x)` back out. All computers have finite memory, meaning they can only represent a finite number of numbers, so there is some number `x` above which they can't represent any number.
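A concrete illustration of that limit, using IEEE-754 doubles (Python here purely for demonstration): above 2^53, doubles can no longer represent every integer, so distinct mathematical inputs to `sin` collapse to the same value.

```python
import math

# Beyond 2**53, IEEE-754 doubles cannot represent every integer,
# so distinct mathematical inputs collapse to the same float.
x = 2.0 ** 53
print(x + 1 == x)  # True: 2**53 + 1 rounds back down to 2**53

# Consequently sin(2**53) and sin(2**53 + 1) are indistinguishable here,
# even though the true mathematical values differ.
print(math.sin(x) == math.sin(x + 1))  # True
```

So even before neural networks enter the picture, "input any `x`, get `sin(x)` back" already fails for ordinary floating point once `x` gets large enough.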
Neural networks are more limited of course, because there's no way to expand their equivalent of memory, while it's easy to expand a computer's memory.
I don't understand this phrase. If I'm deciding whether to work for a company, I don't care about the ability to hold management decision-makers to account. I care only about the quality of the decisions. (I would rather have an unaccountable decision maker that makes good decisions than an accountable decision maker that makes bad decisions.) Putting myself in the shoes of an owner of a company, I have the same preference. The only people I can imagine actually preferring this rule are management themselves, as it means they can't be replaced by computers no matter how much worse they are at their jobs than a computer would be.
For me, yes, it is. I make an app for myself, and I thought about making it a server-rendered app like you suggested. But it's just so much better in my opinion to do everything on the client side because it means that every interaction has zero latency, regardless of the quality of my internet (which is often bad).
> WFH forces employers to compete. It gives a lot of power to employees, because they can [...] work fewer hours, moonlight for multiple companies, etc
Probably "working fewer hours" and "moonlighting for multiple companies" have negative effects on productivity that employers would like to avoid.
I doubt it; for knowledge workers, productivity is a very complex equation.
For example, is 80 hours of work a week more productive than 40? If you're working an assembly line, probably.
If you're a programmer, definitely not. You will write more bugs, make more mistakes, and churning out code doesn't mean much. Any monkey can write code, but writing maintainable code is hard, and reading that code and actually choosing to maintain it is harder.
Again, it depends. Maybe they have more pride in their job or despise their company less, who knows.
And I don't mean productivity per hour. Lol. No, I mean absolute.
An employee working like a dog will probably get less work done than one just working normally. Because most of the extra work is negative: it doesn't add to the work-done pile, it chips away at it.
Eventually, I would think, you reach a point where an employee is less productive than no employee at all. Seems impossible to be working 100 hours a week and be getting less than nothing done, but if you're actively making the product worse or creating debt, that's how I would classify that.
I've already kind of made it clear here where I stand on this, but I gotta tell you, you really do sound a lot like management.
Do you really think your superstar programmers are well and truly doing intellectual work, the kind of work that produces business value, from the time they hit the coffee machine at 9AM to the time they grab their briefcase to go home at 5PM?
If you believe this, I think you might be interested in bringing the Bobs in to discuss making our T.P.S. reporting process more efficient. They have thoughts on coversheets.
In Deep Work, Cal Newport posits that even the most disciplined, high performers can do work that requires really focused attention for a max of four hours per day. He's a computer science professor, not exactly "management."
And these days, for a lot of knowledge workers there's a pretty strong case that anything which isn't this "deep work" can probably be automated.
So yeah if I'm paying you a full time salary I want those four hours. Without necessarily rendering judgment on what a moonlighting clause should or shouldn't look like, if I'm not getting those four hours, I don't want you on my payroll.
And you think you're more likely to get those four hours in an open office environment with distractions aplenty, as opposed to my effectively noise-proofed home office where I can actually focus?
It really depends. I believe and apply a lot of Cal Newport's advice, and benefit greatly from it. But I also see in my daily life how just being close to the people you work with, and (crucially) being a short walk or one floor away from people in other groups, creates immense value by helping unclog processes and especially by creating new ideas and products that wouldn't otherwise exist.
Bullshit. When I'm in the office, most of my time is spent on making sure it looks like I'm working and obsessing over whether someone is standing or sitting behind me and looking at my screen, because I'm in a panopticon. There is no time for deep work.
First, nobody cares what you want. Second, do you pay adequately for those 4 hours? Guess what happens if you don't. Even if you do, are you OK with 2 hours today and 6 hours tomorrow? How about a year of 1-hour days and then a 24-hour stretch that fixes all the problems of the last 2 years?
The Internet tough guy strikes again, as if employment is not a voluntary contract between two consenting adults. This militant attitude is always good for a laugh... hate management if you like, but if you think no employee ever worries about what their manager wants, sounds like you've never held a job.
Not really sure why I am even responding to this amazingly stupid line of discussion. I mean if you absolutely hate the idea of having a boss (I know I did) then there is a solution for that - start your own company! It's not as easy as being a badass on the Internet, sure, but you might have to look at both sides of the argument and you might even end up getting rid of that chip on your shoulder.
Let me quickly go count my years of experience, will have to use all my digits and extremities, might be a minute.
I don't think you got the point behind the comment. We do not have a good way to quantify effort, so we ask for a fixed amount of time in chairs, tickets closed, etc. That's the best we can come up with.
I’ll attempt a steelman and say, no, employees are not doing deep work from 9–5, but I could see being in an office 9–5 setting the stage for a lot of deep work to be done. Moonlighting for another company I could especially see as detrimental to focus at work.
The nature of modern offices pretty much prevents deep work.
You're not going to get deep work when you pack people like sardines into neat rows of desks, where pretty much at any time someone within one row away is going to be in a meeting - conducted of course over teleconferencing software. Or some people will talk (honestly, being in the office mostly translated to chit-chat for me).
Deep work in an open office? Don't make me laugh. Please, for the love of god, bring back cubicles.
The steelman is that in the office you get cross-team pollination organically: team lunches, talking through an idea with another team on how to do something better right in the moment the idea comes up. This happens more often in person than remote.
Does it need 5 days a week in the office? Absolutely not. 1-2 is plenty.
> Deep work in an open office? Don't make me laugh. Please, for the love of god, bring back cubicles.
Or doors.
25 years ago, Microsoft Redmond had a slogan: "Every dev a door".
In the early 2000s, it became two devs per room. We all know what happened since. Open offices save facilities concrete money per seat; productivity lost from lack of deep work is not a line item anyone knows how to track.
"Every dev a door" plus pair programming was shown by studies from groups like Pivotal Labs to be optimal for working code, but ... and it's a big but ...
Companies intentionally optimize for things other than working code. You get what you measure, and they measure what's easy instead of measuring what matters.
I don't expect someone to do deep focused work from 9am to 5pm.
But at the same time, I don't expect them to spend their 9-to-5 working for another company at the same time.
As a founder, who respects the 9-to-5 and supports WFH, if I'm paying for 8 hours of work, I want 8 hours of output. Not 4 hours of output, and then you working 4 hours for another job.
If multi-jobbing becomes a thing, then WFH becomes untenable because at least in the office you can be monitored.
To be fair, you're either paying for hours or for output, because I assure you you are not paying staff accurately for their output. You can of course sack someone who outputs notoriously little, but if you get output exceeding your average "8 hours of output", you shouldn't care if someone made it in 1 hour or 16, or at least you wouldn't be able to tell.
I'm using "output" as quoted in context, it's such a nebulous measure unless you're specifically buying a product.
To be clear I'm having a lot of fun being snarky here.
Like everything it's a mix.
In seriousness, I do find the labor perspective sorely and quite conspicuously lacking in these discussions, both discussions about remote work and about DEI backlash.
I’ve hired remote employees, had them come in, offered stimulating work and 5% above their requested pay, with mentions that I could double it in one year, but I could never get them to the smartness and clarity of analysis they had during the interview. After 6 months they were clearly winging it in <1 hour a day and exhausting my team lead, who didn’t think they were moonlighting for several companies. I did: their progress had entirely stalled and their performance was negative.
I fired both the employees and the manager. This “remote employees don’t moonlight” is a union trope.
Not enough to move the needle. 25% would move the needle.
> with mentions that I could double it in one year
They didn't believe you, or didn't after a short time working there. So it didn't move the needle.
More so if they're experienced. Similar mentions of prospects are common in interviews, and rarely followed through. You eventually learn to be skeptical of them, while rolling with it, just in case.
Also, if you might be willing to pay double their requested salary, they start realising their value on the open market is much higher than they'd previously thought, or could be with a little presentation and experience.
On the other hand, if you'd put it in the contract that their salary will double after 1 year, subject to well-defined criteria and a history of actually doing it with existing employees, then they'd believe you, and that would move the needle a lot.
From your story I speculate you were right to fire them, but you never figured out how to get the best out of them. In recent years it's also possible you were subject to employment fraud, as clarity of analysis can disappear if the person doing the work is different from the person who answered the interview questions.
Progress that's entirely stalled or negative can happen for many other reasons than moonlighting, and many other reasons than not putting enough time.
I've been fully remote for 5 years, partially remote for 15. Being remote removes many sources of stress for me. I don't moonlight.
The one thing that decreases my productivity, in some positions, is bad management. Of course, that was already the case when I was fully office-based.
Atlassian is a dumpster fire; they've been running shit engineering for about 3 years now.
Give me the secret sauce to being productive with remote employees. Maybe some have found it, but apparently paying above the employee's ask, offering to double the salary in case of success, sending them to conferences, and spending a lot of human time with them gets me the “evil employer” category on most forums.
Yeah, I know “Treat them even better!” is, again, the word of the union guy, but in most cases, the employer has to eat a shit sandwich.
have you doubled anyone's salary? if not, it can come across as an empty promise you won't fulfill
>sending them to conferences and spending a lot of human time with them
do they want and benefit from these things? or do they distract them from their productive work?
>in most cases, the employer has to eat a shit sandwich.
not really, you were able to fire who you wanted to fire easily. it also seems that you didn't consider other factors for why the employee didn't work out. does your interview process poorly select for people who will do well in the role? are there other possible explanations for low productivity than the employee having a second job?
I don't know you and you aren't my current employer anyways but a good first step to requiring me to go back to the office would be actually giving me an office!!
The old version stayed around but (essentially) nobody wanted to use it. If they had, the forked version would be worthless. That is the difference. A cryptocurrency fork cannot succeed without the consent of the community. No one is compelled to use it the way that you are compelled to accept the decisions of a regulator.
yeah this sounds like direct plutocracy - money votes, not people.
Which I guess is a criticism of crypto in general: if it were to be adopted widely, the rich could gang up at any time on the rest of us and do a 50% vote to rewrite the votes. Right now the 1% owns about 30% of the wealth in the US; it's not a stretch to see that go to 50%.
If people disagree with one particular regulation, you think it’s possible to vote someone in to fix that issue in isolation? I don’t think you have thought about this very deeply, either that or you’re completely ignorant of the political environment you inhabit.
They got scared into it by fear of being left behind. Pretending that the majority can always make the good choice (even for their own benefit!) is, well, just look at the state of US politics.
And it's WORSE, because there is no one-person-one-vote; the amount of money you have is directly proportional to your "voting power" in cryptocurrency.
> then during spikes in demand (or alleged spikes in demand) they coordinate to keep the price from dropping.
Why would they need to coordinate to keep the price from dropping during a spike in demand? A spike in demand would obviously not be expected to lower prices, regardless of collusion.
Yes, the industry is capacity limited so if there's a true spike in demand, prices will be high even absent any collusion. Especially if previous investment in expanding capacity has been lacking for many years.
If the industry is at capacity (which it plausibly is, especially since HBM memory is made in the same facilities) then no one can physically "undercut" anyone else. Collusion works by artificially restraining supply of some valuable good; if there was genuine collusion at play, we'd probably be seeing companies make less of the expensive HBM (to push its price even higher; note that patent and other IPR restrictions can in fact have this effect, to some extent) and more of the comparatively cheap DRAM!
>Why would they need to coordinate to keep the price from dropping during a spike in demand?
They wouldn't, you're right.
But I would expect them to follow the sorts of behavior we've observed in other markets: egg prices, gasoline prices. When a spike occurs, even one as brief as a lightning strike, sellers only very slowly drop prices afterward, when in a purely capitalistic world the drop ought to be equally fast. That slowness is suggestive of mutually agreed-upon collusion. After all, it's in every seller's best interest to milk that window of "consumers temporarily agreeable to scalping prices" as hard as possible, Nash equilibrium or whatever amongst sellers. Many such cases. More vicious and brutal punishments for such behavior would serve to benefit the common man, which is the final point and benefit of capitalism.
Of course. Price drops only really come through in response to competition.
Long term high prices invite further investment. Investment arrives and wants a quick return. Fastest way to return is to sell above cost but below market. Established players respond. Yada yada. I once met a former projector salesman who was unbelievably angry that someone, I think Acer, came along and destroyed the ~2000 AUD hard price floor that projectors once commanded, which dropped the whole market and his commissions along with it.
Even when collusion is government endorsed instead of outlawed, the same rules apply. See the Bromkonvention. You need the new player willing to take the 10% margin to hurt the bottom line of the guy taking 150%.
>when in a purely capitalistic world the price drop ought to be equally fast
No, the price can only drop as fast as supply and competition can catch up. For an industry with high capex costs, that's extremely slow. I would think some banks would be keen to take a risk on a new RAM fab based on the demand coming from AI, but I would personally not take the bet that AI will be in this state in 5 years' time. So assuming banks and other lenders are as skeptical as I am, they wouldn't lend, or would request that a bigger entity guarantee the loan.
>brutal punishments for such behavior
Brutal punishments for failing to ramp up production? Or for not lowering prices fast enough? I really don't understand.
> when in a purely capitalistic world the price drop ought to be equally fast
That is not obvious. When demand is higher than supply, it is clearly a good move to raise prices. But when demand is lower than supply, it is not clear that lowering prices would raise volumes enough to compensate for the lower margins.
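A toy calculation makes the asymmetry concrete (the numbers are purely illustrative, not from any real market): a price cut only pays off if volume grows enough to offset the thinner margin.

```python
# Illustrative numbers only: unit cost, and two pricing scenarios.
cost = 50.0

# Scenario A: hold price at 100, sell 1000 units.
profit_hold = (100 - cost) * 1000  # 50,000

# Scenario B: cut price to 90; suppose volume rises only 10%.
profit_cut = (90 - cost) * 1100    # 44,000

print(profit_cut < profit_hold)  # True: the 10% price cut lost money
```

With these assumed numbers, the seller needs volume to grow by 25% just to break even on a 10% cut, which is why "prices should fall as fast as they rose" doesn't follow automatically.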
I think they have other issues, for example, they have no FFI. I think focusing on the business is actually a pretty decent idea. Trying to make money will force them to focus on things that are important to users and not get distracted bike-shedding on things that I would if I were them (like typeclasses).
Unison is one of the most exciting programming languages to me, and I'm a huge programming language nerd. A language with algebraic effects like Unison's really needs to hit the mainstream, as imo it's "the next big thing" after parametric polymorphism and algebraic data types. And Unison has a bunch of other cool ideas to go with it too.
This isn't really what they're going for, but I think it can potentially be a very interesting language for writing game mods in. One thing about game mods is that you want to run untrusted code that someone else wrote in your client, but you don't want to let just anyone easily hack your users. Unison seems well-designed for this use case because it seems like you could easily run untrusted Unison code without worrying about it escaping its sandbox due to the ability system. (Although this obviously requires that you typecheck the code before running it. And I don't know if Unison does that, but maybe it does.) There are other ways of implementing a sandbox, and Wasm is fairly well suited for this as well. But Unison seems like another interesting point in the design space.
Still on the subject of game dev, I also think that the ability system might actually be very cool for writing an ECS. For those who don't know, an ECS basically involves "entities" which have certain "components" on them, and then "systems" run and access or modify the components on various entities. For performance, it can be very nice to run different systems on different threads simultaneously. But to do this safely, you need to check that they're not going to access the same components. This limits current ECS implementations, because the user has to tediously tell the system scheduler which components each system is going to access. But Unison seems to have a versatile system for inferring what abilities are needed by a given function. If that worked here, then accessing a component could be an ability, so a function implementing a system that accesses 10 components would have 10 abilities. If those 10 abilities could be inferred, it would be a huge game changer for how nice an ECS is to use.
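To make the status-quo pain concrete, here is a minimal Python sketch of the manual declarations the comment describes (all names and the conflict rule are illustrative, not any real ECS API): each system must declare its component access up front, and the scheduler checks pairs for write conflicts before running them in parallel.

```python
def conflicts(a, b):
    """Two systems conflict if either writes a component the other touches."""
    return bool(a["writes"] & (b["reads"] | b["writes"])
                or b["writes"] & (a["reads"] | a["writes"]))

# Hypothetical systems with hand-declared component access.
physics = {"reads": {"Velocity"}, "writes": {"Position"}}
render  = {"reads": {"Position", "Sprite"}, "writes": set()}
ai      = {"reads": {"Position"}, "writes": {"Velocity"}}

# physics writes Position, which render reads: cannot run in parallel.
print(conflicts(physics, render))  # True
# render and ai only share a read of Position: safe to parallelize.
print(conflicts(render, ai))       # False
```

The point is that the `reads`/`writes` dicts are exactly the tedious part: they repeat information already implicit in each system's body, which is what ability inference could derive automatically.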
> Unison seems well-designed for this use case because it seems like you could easily run untrusted Unison code without worrying about it escaping its sandbox due to the ability system. (Although this obviously requires that you typecheck the code before running it. And I don't know if Unison does that, but maybe it does.)
Indeed we do, and we use this for our Unison Cloud project [1]. With Unison Cloud we are inviting users to ship code to our cloud for us to execute, so we built primitives into the language for scanning a code blob and making sure it doesn't do IO [2]. In Unison Cloud, you cannot use the IO ability directly, so you can't, for example, read files off our filesystem. We instead give you access to very specific abilities to do IO that we can safely handle. So for example, there is an `Http` ability you can call in Cloud to make web requests, but we can make sure you aren't hitting anything you shouldn't.
I'm also excited about using this specifically for games. I've been thinking about how you could make a game in unison cloud and another user could contribute to the game by implementing an ability as a native service, which just becomes a native function call at runtime. I started working on an ECS [3] a while back, but I haven't had a chance to do much with it yet.
Don't believe all the horror stories on HN are necessarily representative without trying it yourself. People generally came up with these newfangled things for a reason. I personally very much appreciate having CI/CD that deploys my changes every time I push.
Anyone can publish weekly benchmarks. If you think Anthropic is lying about not nerfing their models, you shouldn't trust benchmarks they release anyway.