I find the topic of the morality or effectiveness of the H-1B program a bit intractable to reason about rationally. Consider a simplified model of the system.
You have 2 countries, C1 and C2.
Scenario 1:
C1 has enough demand for 100 tech jobs.
C1 only has 50 qualified natives for 100 tech jobs.
The wages of C1 go up because there is more demand than supply.
Scenario 2:
C1 has enough demand for 100 tech jobs.
C1 only has 50 qualified natives for 100 tech jobs.
Now you put in an H-1B visa program that pays the same as the prevailing wage for a local native.
C2 has enough candidates to fill the other 50 positions.
The wages of C1 will NOT go up because now supply matches demand.
Is Scenario 2 fair? Who gets to decide what fair is? Given the above system, I would argue that H-1B visa programs suppress wage growth in C1, even if the program fills jobs that would otherwise go unfilled and even if those jobs pay exactly the same as a native worker earns.
I am not dogmatic about that though. Willing to hear a counterpoint to scenario 2.
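The two scenarios reduce to a bit of arithmetic. Here is a toy sketch (the numbers come from the scenarios above; the wage-pressure rule is an illustrative assumption, not real labor economics):

```python
# Toy model of the two scenarios above. Demand and supply numbers are
# from the comment; the "up"/"flat" rule is a deliberate simplification.
def wage_pressure(demand, domestic_supply, visa_supply=0):
    unfilled = demand - (domestic_supply + visa_supply)
    return "up" if unfilled > 0 else "flat"

assert wage_pressure(100, 50) == "up"                     # Scenario 1
assert wage_pressure(100, 50, visa_supply=50) == "flat"   # Scenario 2
```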
Scenario 2 now has C1 with 100 tech workers, and it got its pick of the 50 best workers from the (lower-paying) C2.
C1 is now a better place to start a new company or expand an existing one, because all the best workers in the field work in C1. Starting the same business in C2 will almost certainly fail.
This is literally why the Bay Area became the world's most important tech hub, and isolationism will allow (and is allowing) Chinese tech to jump ahead of the US. The government doesn't care about losing a literal arms race, largely because it wants to reduce the political power of California. By no longer educating and welcoming the world's brightest engineers, the USA is going to be reduced to support and manufacturing roles, where its large workforce will have to compete with everyone else and salaries will tumble.
1. Companies can hire overseas. There's some cost to it in terms of added friction, but if wages rise enough in C1, then it's worth the friction to hire in C2 instead.
2. Workers also consume and invest, raising demand for other jobs. Employment is not a zero sum game, especially at the macro scale.
Scenario 2 makes sense. I think the counterpoint people bring up is to just stick to Scenario 1, let the salaries go up, and let people jump ship every now and then for a raise, but they forget that C1 is an ultra-capitalist country.
"By a continuing process of inflation, government can confiscate, secretly and unobserved, an important part of the wealth of their citizens." -- John Maynard Keynes
But what if your codebase has to interact with leads and customers, and also with a third-party system called Metrica that has its own leads and customers?
When you write a loop, do you now name the variables OurPerson.Age and MetricaPerson.Age?
What if, 3 years from now, you bring another third-party vendor into the system and have to write code against their data, and in their data they name their field OurPerson.Age?
Not saying you are wrong at all. Naming things is just hard and context-dependent. I think that is why it is endlessly argued about.
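One way around the collision is to tag records with their origin rather than encoding the source into every variable name. A minimal sketch ("Metrica" comes from the comment above; the field names and values are made up for illustration):

```python
# Tag each record with its source instead of baking the source into
# variable names like MetricaPerson.Age. The loop variable stays generic.
from dataclasses import dataclass

@dataclass(frozen=True)
class Person:
    source: str   # "ours", "metrica", some future vendor, ...
    name: str
    age: int

people = [
    Person(source="ours", name="Ada", age=36),
    Person(source="metrica", name="Ada", age=37),  # same person, stale data
]

# Filter by origin at the point of use; no per-source variable names needed.
metrica_ages = [p.age for p in people if p.source == "metrica"]
```

A third vendor added years later just becomes a new `source` value, rather than a new naming convention.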
Idle thought. Not an economist.
What if the current tech scene is like how the U.S. capitalist class took power away from blue-collar workers in the '80s and '90s by leveraging cheaper labor in foreign countries? Now programmers, sysadmins, and systems engineers no longer have leverage (real or imagined) because the owners just point to AI.
If they could do it, they would. White-collar workers don't have any special status that protects them from getting the same treatment the blue-collar workforce got. That said, I think it'll play out differently this time. Offshoring blue-collar jobs eventually hurt the capitalist class; they lost whole industries to China. Offshoring white-collar jobs is a different kind of risk: you end up with IOUs to countries that might not stay favorably disposed toward you when the power dynamics shift.
> Offshoring white collar jobs is a different kind of risk
Isn't this risk also present with offshoring blue-collar jobs? I agree that both are bad ideas, but the capitalist class didn't care before and isn't exactly well known for its long-term thinking.
I remember dialing up to a BBS in my area in 1990 that had 4 phone lines. That was amazing at the time, when most BBSes only had one line.
But I do remember downloading FILE_ID.DIZ text files about other BBSes, and reading some magazines that mentioned BBS systems with 32 or more phone lines that you had to pay for. That seemed like it was on another level, in another part of the world, like fantasy compared to the area I was in.
I have not seen tests in any codebase I worked on in the past 20 years. I have noticed a somewhat sanctimonious demeanor in quite a few people who advocate for tests (on comment boards). I find the reactions to discussions of tests fascinating because the topic seems to elicit very strong opinions, sort of a "do you put your shopping cart back" kind of topic, but for programmers.
I find that fascinating, because interacting with the tests in our codebase (both Python and JS) answers a _lot_ about "how is this meant to work", or "why do we have this". I won't say I do test-driven development, at least not very rigorously, but any time I am trying to make a small change in a thing I'm not 100% familiar with, it's been helpful to have tests that cover those edge cases. :)
I am fascinated by the prevalence of wanting "tests" in Hacker News comments. Most of the code I have worked on in the past 20 years did not have tests. Most of it was shopping carts, custom data transformation code, server orchestration, and plugin code to change some aspect of a website.
Now I have had to do some Salesforce Apex coding, and the framework requires tests. So I write up some dummy data for a user and a lead and pass it through the code, but it feels of limited value, almost like additional ceremony. Most of the bugs I see come from different users' misconceptions about what a flag means. I cannot think of a time a test caught something.
The organization is huge and people do not go and run all the code every time some other area of the system is changed. Maybe they should? But I doubt that would ever happen given the politics of the organization.
So I am curious: what kinds of tests do people write in other areas of the industry?
> what kinds of tests do people write in other areas of the industry?
Aerospace here. Roughly this would be typical:
- comprehensive requirements on the software behavior, with tests to verify those requirements. Tests are automated as much as possible (e.g., scripts rather than manual testing)
- tests are generally run first in a test suite in a completely virtual software environment
- structural coverage analysis (depending on level of criticality) to show that all code in the subsystem was executed by the testing (or adequately explain why the testing can't hit that code)
- then, once that passes, run the same tests in a hardware lab environment, testing the software as it runs on the actual physical component that will be installed on the plane
- then test on an actual plane, through a series of flight tests. (The flight testing would likely not be as comprehensive as the previous steps.)
A full round of testing is very time-consuming and expensive, so as many problems as possible should be caught and fixed in the virtual software tests before anything even gets to the hardware lab, much less to the plane.
Per corporate policy -- in the name of safety and legal reasons -- the most extensive public uses of AI, like vibe coding and agentic systems, aren't really options. The most common usage I have seen is more like consulting AI as a fancy StackOverflow.
Will this change? I personally don't expect to ever see pure vibe coding, with code unseen and unreviewed, but I imagine AI coding uses will expand.
The interesting bit is this: how much of what you wrote over all those years actually did what you wanted it to do, no more, no less?
This is where testing gets interesting: I took some code I wrote 30 or so years ago and decided to put it, literally, to the test. A couple of hundred lines from a library that had been in production without ever showing a single bug in all that time. And yet: I'm almost ashamed at how many subtle little bugs I found. Things you'd most likely never see in practice, but still, they were there. Then I put a couple of those bugs together and suddenly realized that that particular chain must have occurred in practice in some program built on top of this. And sure enough: fixing the bugs made the application built on top of it more robust.
After a couple of weeks of this I became convinced: testing is not optional, even for stuff that works. Ever since, I've done my best to stop assuming that what I'm writing actually does what I want it to. It usually does, for the happy path. But there are so many other paths that, with code of any complexity, even if you religiously avoid side effects, you can still end up with issues you overlook.
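Tests are often the only thing that forces you down those other paths at all. A made-up illustration (the function is hypothetical, not from the library described above):

```python
# A tiny utility with an obvious happy path and less obvious edges.
def overlap(a_start, a_end, b_start, b_end):
    """Length of the overlap between two half-open ranges [start, end)."""
    return max(0, min(a_end, b_end) - max(a_start, b_start))

# Happy path: the case you mentally checked when you wrote it.
assert overlap(0, 10, 5, 15) == 5
# Edge cases a test suite forces you to confront explicitly:
assert overlap(0, 10, 10, 20) == 0   # touching ranges do not overlap
assert overlap(5, 5, 0, 10) == 0     # an empty range overlaps nothing
```

Writing the edge-case assertions is exactly the exercise of asking "what do I actually want here?" rather than assuming the code already does it.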
I think you have the correct approach. Tests are signals, not proofs.
I do often hear people on HN advocating for Test-Driven Development (TDD), but that, I think, is a different category of error. It encourages people to write to the tests, not to the spec. They'll claim the test is the spec without understanding that the spec can't be written down in full. The spec is the intent; the written spec is an approximation of that, and the tests are an approximation of that.
The problem with TDD is that it makes it easy to believe that by passing the tests your code is complete and bug free. It's easy to treat tests as proofs.
But just because tests aren't proofs doesn't mean they're not useful. They narrow the space in which errors exist. Tests also are a form of communication. It can tell others about your assumptions, again, making the search space for debugging smaller.
Tests are useful, but no one can write tests that are complete.
The value of tests is that when they fail, they show you something you broke that you didn't realize. 80% (likely more, but I don't know how to measure it) of the tests I write could safely be thrown away because they never fail again - but I don't know in advance which tests will fail and thus inform me that I broke things.
The system I'm working on has been in production for 12 years - we have added a lot of new features over those years. Many of those needed us to hook into existing code, tests help us know that we didn't break something that used to work.
Maybe that helps answer the question of why they are important to me. They might not be to your problems.
How do you know you haven't unknowingly broken something when you made a change?
I think if:
- the code base implements many code paths depending on options and user inputs, such that a fix for code path A may break code path B
- it takes a great deal of time to run in production, such that issues may only be caught weeks or months down the line, when it becomes difficult to pinpoint their cause (not all software is real-time or web)
- any given developer does not have it all in their head such that they can anticipate issues codebase wide
then it becomes useful to have (automated) testing that checks a change in function A didn't break functionality in function B that relies on A in some way(s), that are just thorough enough that they catch edge cases, but don't take prod levels of resources to run.
Now I agree some things might not need testing beyond implementation. Things that don't depend on other program behavior, or that check their inputs thoroughly, and are never touched again once merged, don't really justify keeping unit tests around. But I'm not sure these are ever guarantees (especially the never touched again).
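The A-breaks-B scenario is easy to sketch. A minimal example (the functions are hypothetical, just to show the shape of the safety net):

```python
# B relies on A's behavior, so a "fix" to A can silently break B.
def normalize(s):           # function A
    return s.strip().lower()

def is_duplicate(s, seen):  # function B, depends on A's contract
    return normalize(s) in seen

def test_fix_to_a_does_not_break_b():
    seen = {normalize("Alice ")}
    # If someone changes A (say, stops lowercasing), this fails and
    # points straight at the broken dependency.
    assert is_duplicate("  ALICE", seen)

test_fix_to_a_does_not_break_b()
```

The test costs almost nothing to run, which is the point: it is the cheap stand-in for "run all the code every time some other area changes."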
I think the whole concept of testing confuses a lot of people. I know I was (and still sometimes am) confused about the various "best practices" and opinions around testing, as well as how, what, and when to test.
For my projects, I mainly want to Get Shit Done. So I write tests for the major functional areas of the business logic, mainly because I want to know ASAP when I accidentally break something important. When a bug is found that a test didn't catch, that's usually an indicator that I forgot a test, or need to beef up that area of functional testing.
I do not bother with TDD, or tests that would only catch cosmetic issues, and I avoid writing tests that only actually test some major dependency (like an ORM).
If the organization you are in does not value testing, you are probably not going to change their mind. But if you have the freedom to write worthwhile tests for your contributions to the code, doing so will probably make you a better developer.
Tests are important. Writing good tests even more so. However, you can't do either if you don't have good product requirements and communication.
Most software has "bugs" simply because people couldn't communicate how it should work.
I think most programmers are on top of actual runtime errors or compilation errors. It's straightforward on how to fix those. They are not on top of logic issues or unintended behavior because they aren't the product designer.
Programmers just cook the burger. If you order it well done, don't complain when it doesn't come out medium rare.
I do test manually in Salesforce. Mainly it's because you do not control everything, and I find the best test is to log in as the user and go through the screens as they do. I built up some Selenium scripts to do testing.
In the old days, for the kinds of things I had to work on, I would test manually. Usually it was a piece of code that acted as glue, transforming multiple data sources in different formats into a database used by another piece of code.
Or an AWS Lambda that had to ingest some JSON and make a determination about what to do: send an email, change a flag, that sort of thing.
Not saying mock testing is bad. Just seems like overkill for the kinds of things I worked on.
One thing I notice in enterprise Java software that I have to read through and update is that, too many times, every developer just wraps everything in an exception. I do not have vast insight into all Java code everywhere, but in my little corner of the world it sure looks like laziness when I have to dig through some ancient Java codebase.
I hang out with a small group of sysadmins who like to spin up the old internet stuff, like irc, gopher.
And that got me to thinking about Usenet and how a ton of software (usually pirated) and images (usually pornography) were posted to it.
And people often posted stupid stuff they said (usually because they were young and dare I say afflicted by a moment of dumb).
I think one of the problems with p2p distributed systems is how do you handle "mistakes". Things you want deleted.
What if someone accidentally posts their address and phone number?
What if you post a communication system with encryption methods, but then the government passes a law that makes it criminal? Maybe in some regimes that puts you on a list for arrest? Look at what is happening with ham radio operators in Belarus...
To me, none of this rises to the level of saying distributed p2p content should not be used. It just has some issues.
Also, unrelated, but I think the plethora of "How does this compare to XYZ" comments is not very helpful. It is too easy to write that kind of post, but much harder to answer it.
You know, a centralized system is not immune to any of the issues you are listing here.
Whether your mistakes can be deleted is up to the operator. They can even lead you to believe your content was deleted, while reporting it to the authorities.
> What if you post a communication system with encryption methods, but then the government passes a law that is criminal
Did you post it while it was legal to do so? Yes. Are you distributing it after it was deemed illegal? No. If you are in a country with a fair justice system, you wouldn't have to worry. If you are in a country without one, they will find a much easier way to get you anyway.
In law and in public opinion, distribution and authorship might not be viewed with such a technical lens, especially in a country trying to ban encrypted communications. A muddying of the two could easily be constructed intentionally, or arise unintentionally from the ignorance of executive and judicial powers.
A large proportion of historic Usenet posts are archived and remain freely available today. Didn't Google end up with one of the larger commercial archives? So that "stupid stuff" is still around, unless it got deleted at the time (and wasn't archived first).
New uploads to GitHub are constantly being scanned by both benevolent and malicious actors for secrets that were inadvertently checked in. It's far too late by the time you notice and delete it.
This P2P system doesn't appear to introduce any new problems that aren't already widespread.
This just seems like acknowledging the reality. If you publish something publicly, it's very possibly forever. Maybe a reasonable solution would be for a user client to delay publishing for a time (like an email client that lets you cancel/recall a sent email for a time).
AD: We're actively working on that issue right now, making the defaults safer. We're also discussing internally how to enable revocation of content at the network level. It won't be perfect, but neither is GitHub or the likes.
> We're also discussing internally how to enable revocation of content at the network level.
Isn't that a solved issue? Or rather unsolvable. With ActivityPub there's just a deletion notification that's obfuscated so that you can't identify the item unless you already have a copy of it. What else can you do?
Right. Radicle nodes from the standard distribution would be kind enough to delete. At the technological level you cannot do more (also not really less, funnily enough). But it would be possible to patch the code and remove deletion.
Often times I just take the "information theory perspective": You fundamentally cannot make something "more private". Once it's out, it's out. You cannot "untell" a secret. That's just not how it works.
But then other solutions also have this problem. Once I have `git fetch`ed from GitHub, or received that e-mail containing a patch on a mailing list, I have a copy on my filesystem. It's going to be pretty darn hard to remove it from there if I don't comply. Maybe you'd have to enforce some law.
In that context, it seems that people were led to believe that "removal from the server(farm)" is the same as "removal from the universe", but that's just not true.
Happy for any insight on how to approach this differently.
I am just glad some thought is being put into it. Thanks for the efforts.
I keep thinking about people putting secrets up on GitHub. You cannot really get rid of something once it is out there, like you said.
But people do make requests to GitHub to remove things. And if no one has put in the effort to copy it and republish it, it is not as "out there" as if it were still on GitHub.
Thinking of old BBS boards on the internet: most people will use the Internet Archive to search for old dead sites. If it is not on there, it is not as "out there" as if it were on the Internet Archive.
I am thinking it is not quite as black and white as it seems. There is some kind of entropy effect.
Thinking on pre-internet newspapers. If you posted something in a fan zine in the 70s, it might have faded from existence due to lost copies, or it might be in some collector's stockpile. It might even be scanned into the Internet Archive. Or not.
No great solutions come to mind. But there does seem to be some "small" value in being able to say, delete this as it was a mistake.
Maybe, also, more education, or a warning about "beware, be extra careful, this is going to be around for all to see for a long time, possibly forever".
> I keep thinking about people putting secrets up on GitHub.
You gave me an idea. For Radicle, we implemented a `git-remote-helper` (Git recognizes `rad://` URIs and then wakes up the helper to handle the rest). This helper could well look at the blobs being pushed and detect secrets, then error out and request a retry with `--force` if the user is sure.
To implement something like this, we'd not want to reinvent the wheel, so we'd want to consume some description of patterns that we should look for. And obviously we're not going to ask GitHub or some web server.
So, is there such library? In a format that is simple-ish to implement filtering for but also catches a good amount of secrets?
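The core of such a filter is just a pattern set applied to each blob. A rough sketch (the patterns below are well-known public formats, e.g. AWS access key IDs start with "AKIA"; real scanners such as gitleaks ship much larger, maintained pattern sets that could be reused instead):

```python
# Minimal blob-level secret detection: match known credential formats.
import re

SECRET_PATTERNS = [
    ("AWS access key ID", re.compile(r"\bAKIA[0-9A-Z]{16}\b")),
    ("private key block", re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----")),
    ("GitHub token", re.compile(r"\bghp_[A-Za-z0-9]{36}\b")),
]

def find_secrets(blob: str):
    """Return (label, match) pairs for every suspected secret in a blob."""
    hits = []
    for label, pattern in SECRET_PATTERNS:
        for m in pattern.finditer(blob):
            hits.append((label, m.group(0)))
    return hits
```

The helper would run this over the pushed blobs and refuse the push on any hit unless the user retries with `--force`.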
Like so many things in life, there are so many variables and criteria, and different ways to weigh them, that I do not think one can make a claim like "text-based tooling is cumbersome compared to the alternatives".
What are the alternatives? I had to do a little Windows shell programming when working on Chef orchestration to set up Windows servers.
There was "flow" programming in webMethods that I had to work on, which tried to provide a snap-into-place GUI for programming data transformations.
I would say there is something limiting in all the GUI-based interfaces I have had to work with: some option you cannot get to, or it is not apparent how two things can communicate with each other.
Text-based options have always seemed more open to inspection and, hence, easier to reason about. YMMV.