Hacker News | abhisek's comments

Wow. Reminds me of the old BBS era.

I'm also building a BBS era idea, called dialup.sh that uses a 100%-text-mode browser to reverse proxy locally hosted websites for people to run little servers for their pals: https://www.youtube.com/watch?v=_Bs7BoQBoBA

I personally like the idea of a modern browser that works over SSH/terminals on its own, but I think the BBS/social, friend-to-friend, small-network idea could be the thing the indie web (tilde, etc.) has been zeroing in on the last few years.

It's good to see more people building in term/text interfaces. I think of it as a reaction to the supersaturation of attention-grabbing content at all times, in all places. Nice to have some quiet, some (visual? content?) noise cancellation for the web! Especially now, in the era of AI-generated content. Overload, man. Getting back to the BBS days is not a bad idea! :)


Totally agree. Microservices unnecessarily make things complicated for small teams. IMHO they solve the problem of velocity ONLY when a large engineering team is slowed down by too many release and cross-cutting dependencies on a monolith. Although I see people solving that effectively with modular monoliths, merge queues, and CODEOWNERS.

The few cases where microservices make sense are probably when we have a small and well-bounded use case like webhook management, notifications, or maybe read scaling on some master dataset.


Can you elaborate a bit on CODEOWNERS? I've not heard of that kind of solution before.

They're a way to assign ownership to individuals or teams on a granular basis, rather than at the repo-level. You can assign entire folders or individual files to people.

Here's more at Github's docs: https://docs.github.com/en/repositories/managing-your-reposi...


You just put a CODEOWNERS text file in the repo with the names of the teams or developers who own each directory.
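For illustration, a minimal CODEOWNERS file might look like this (the team and user names are made up; GitHub matches patterns like .gitignore, and the last matching rule wins):

```
# .github/CODEOWNERS — example only, names are hypothetical
/billing/          @acme/payments-team
/infra/terraform/  @acme/platform-team
docs/*.md          @jdoe
```

Once this file is in place, GitHub automatically requests reviews from the listed owners when a PR touches their paths.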

GitHub Actions by default provides an isolated VM with root privileges to a workflow. I don't think job-level privilege isolation is in its threat model currently, although it does allow job-level scopes for the default GitHub token.

Also, the secrets are accessible only when a workflow is invoked from a trusted trigger, i.e. not from a forked repo. Not sure what else can be done here to protect against a compromised third-party action.
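As a sketch of those job-level token scopes: a workflow can drop the default GITHUB_TOKEN to read-only at the top level and grant an individual job only the scopes it needs (the job and step names here are illustrative):

```yaml
# Hedged example: least-privilege GITHUB_TOKEN scoping.
name: publish
on: [push]

permissions:
  contents: read        # workflow-wide default: read-only

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # only this job may push packages
    steps:
      - uses: actions/checkout@v4
```

This doesn't sandbox a malicious third-party action within the job, but it does cap what the token it receives can do.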


People have been running different levels of privileged code together on the same machine ever since the invention of virtual machines. We have lots of lightweight sandboxing technologies that could be used when invoking a particular action such as tj-actions/changed-files that only gives it the permissions it needs.

You may do a "docker build" in a pipeline, which does need root access and network access; but when you publish a package on PyPI, you certainly don't need root access, and you also don't need access to the entire internet, just the PyPI API endpoint(s) necessary for publishing.


How would you define a good PM? I have been looking for this definition for a while.

In my startup experience, it seems to me the best PMs are the CTO and the early engineers, who have near-infinite business and user context.


Communicates well, focuses on the people, problems and solutions, not tools and processes.

Example: a good PM will build a mock-up of a UI, go over it in detail with engineers, then let them break up their own work. They are focused on the actual product. A bad PM will write 10 Jira tickets without any real context, assign them without discussion, then add 20 different tracking fields that nobody will use.


> it seems to me the best PMs are the CTO and the early engineers who has near infinite business and user context.

That's basically the role of the PM: to have all the business and user context and to use that to harmonise vision between the various "stakeholders" (parts of the business, users/clients, etc). But one doesn't have to be the CTO or an early engineer to have this context/skillset.


Same experience. IME, founders make amazing PMs because they care about the company and they care about people not wasting their time.

Not OP, but I can answer:

They're OK with ideas coming from someone else, check their ego at the door, and listen to both engineers and customers. They make engineers work less and produce more value. Most importantly, they don't have a "vision"; they help organize the team so the team has a shared vision built by the team.

EDIT: Also: They're not competing with the engineering manager or lead developer for some sort of leadership. They're talking with customers instead of asking sales to do it. They're working on the product aspect of tasks instead of offloading them to engineers.


> They're not competing with the engineering manager or lead developer for some sort of leadership.

Yep. Most of the managers that I've had that have been promoted from engineering have felt that their role is also tech lead, and have been poor at both being a tech lead and a product manager.


Definitely agree.

I worked with one of those in the past, and the attempts at micromanagement were not only absurd but very disruptive.

I now prefer "non-technical" PMs to "half-technical".


Yes! The best PMs I've worked with have been founders.

I was a PM before I went off and started my own non-tech company.

My definition of a good PM is someone who can champion both customers/users and developers concurrently, while sticking to the company's value prop and competitive edge.

In some cases, it's boiling down the needs of the customer into something achievable before sales gets in the way with over-promising and under-delivering. In other cases, it's telling the executives to stop wasting engineering's time with excessive meetings and scope-creep. It could be simply going and getting the engineers coffee. It could also be telling engineering to stop over-engineering the MVP and to simply get what needs to be done, done.

In simple terms, someone who can go to bat for any facet of the company at any time, externally or internally, in order to make sure the right product is being built in a timely manner that aligns with both business needs and, most importantly, customer needs.


Trying to make it more like a checklist:

* Do the developers know what the customers want?

* Do the customers have realistic expectations?

If yes to both, then the PM in between is doing a good job. Bonus points if higher management is aware of that.


Exactly.

A good PM should effectively get out in front of the sales team to make sure customers/users feel heard and understood, and also to communicate to the customers/users what is and isn't possible within a given period of time.

A good PM should also know how to communicate a "no" to anyone in the business cycle from anyone else in the business cycle. Their job is effectively to be the firewall/filter from one team to another.

No, customers don't want Feature XYZ even though engineering wants to build it. No, engineering can't build Feature ABC even if a customer wants it. No, sales cannot promise Feature 123 to a customer, especially without checking with engineering first. No, executives can't force engineering to focus on the CMO's pet project, or force sales to hit numbers if the product sucks or isn't what the market wants.

And so on


So true. I have tried building from scratch so many times for specific use cases with my own opinionated experience, only to recreate the bloat over time. That's actually the good outcome. The alternative was building something based on a momentary spark of creativity that no one, not even me, ended up using.

I think you are only looking at Kubernetes for running and updating container images. If that’s the use-case then I guess it’s overkill.

But Kubernetes does much more in terms of providing the resources required for these containers to share state, connect to each other, get access to config or secrets etc.

That's where the CPU and memory cost comes from: the cost of managing your containers and providing them the resources they need.

> basically acts as a giant while loop

Yep. That's the idea of convergence of state, I guess. In a distributed system you can't always have all the participating systems behave in the desired way, so the manager (or orchestrator) of the system continuously tries to achieve the desired state.
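That "giant while loop" can be sketched as a reconciliation step: compare desired state with observed state and emit whatever actions are needed to converge. This is a toy model of the pattern, not the actual Kubernetes API:

```python
# Toy sketch of the Kubernetes-style reconciliation ("while loop") pattern.
# Names and state shapes are illustrative, not real Kubernetes objects.

def reconcile(desired: dict, observed: dict) -> list[str]:
    """Compute the actions needed to move observed state toward desired."""
    actions = []
    for name, replicas in desired.items():
        have = observed.get(name, 0)
        if have < replicas:
            # Not enough replicas running: start the missing ones.
            actions += [f"start {name}"] * (replicas - have)
        elif have > replicas:
            # Too many replicas running: stop the extras.
            actions += [f"stop {name}"] * (have - replicas)
    for name in observed:
        if name not in desired:
            # Running something the desired state no longer mentions.
            actions.append(f"delete {name}")
    return actions

# One iteration of the loop: converge toward 3 replicas of "web".
print(reconcile({"web": 3}, {"web": 1, "old-job": 1}))
# → ['start web', 'start web', 'delete old-job']
```

A real controller runs this in a loop (triggered by watch events), which is exactly why it tolerates participants misbehaving: the next iteration just tries again.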


> But Kubernetes does much more in terms of providing the resources required for these containers to share state, connect to each other, get access to config or secrets etc.

This was OP's argument, and mine as well. My side project, which is counting requests per minute or hour, really doesn't need that; however, I need to eat the overhead of K8s just to have the nice DX of being able to push a container to a registry and have it deployed automatically with no downtime.

I don't want to pay to host even a K3s node when my workload doesn't even tickle a 1 vCPU / 256 MB RAM instance, but I also don't want to build some custom scaffold to do the work.

So I end up with SSH and SCP... Quadlets and podman-systemd solve those problems of mine reasonably well, and OP's post is very valuable because it builds awareness of a solution that solves my problems.
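For anyone unfamiliar, a quadlet is just an INI-style unit file that podman-systemd turns into a systemd service. A minimal sketch (the image name and port are made up):

```ini
; ~/.config/containers/systemd/web.container — illustrative example
[Container]
Image=registry.example.com/myapp:latest
PublishPort=8080:8080
; Let `podman auto-update` pull and restart on new image pushes
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, systemd manages the container like any other service, which covers the push-to-registry, zero-fuss redeploy workflow described above.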


IAM is complex, more so with federation and cross-account trust. Not sure every weakness can be considered a vulnerability.

In this case, I was looking for a threat model within which this is a vulnerability, but was unable to find one.


The security industry, unfortunately, is awash with best-practice violations masquerading as vulnerabilities.

Still trying to grasp the idea of archiving messages from an E2E-encrypted communication system into a storage that entirely defeats the purpose of using something like Signal.

It's like cashing in on the trust in the Signal protocol and app while breaking its security model so that someone else can search through all messages.

What am I missing here?


> What am I missing here?

OK, say you're a bank. The SEC states you need to keep archives of every discussion your traders have with anyone at any time (I'm simplifying things but you get the point). You keep getting massive fines because traders were WhatsApping about deals.

So now you've got several options - you can use MS Teams, which of course offers archival, compliance monitoring etc. But that means trusting MSFT, and making sure your traders only use Teams and nothing else. You can use a dedicated application for the financial industry, like Symphony or ICE Chat or Bloomberg, but they're clunkier than B2C apps.

And then the Smarsh (owners of Telemessage) salesman calls you, and says "your users can keep using the apps they love - WhatsApp, Signal - but we make it compliant". And everyone loves it (as long as no-one in your Security or Legal teams are looking too hard at the implications of distributing a cracked version of WhatsApp through your MDM...)

Edit: here's the install document for their cracked WhatsApp binary https://smarsh.my.salesforce.com/sfc/p/#30000001FgxH/a/Pb000...


> say you're a bank. The SEC states you need to keep archives of every discussion your traders have with anyone at any time

These records are encrypted in storage.


That is more than overly optimistic given how slow the pace of any technical innovation in finance is. The recent and not so recent issues with Citi are a good example of that.

Seems like it doesn't resolve the trust issue; it just shifts it to a smaller firm with more to lose.

It definitely doesn't resolve the trust issue! I would trust MSFT a million times more than these cowboys. What it does give you is peace with your traders (who can be real divas..) - they can keep using "WhatsApp" and "Signal" and you can monitor everything

Oh wow! There are other ways to archive whatsapp messages that don't involve modified WhatsApp apks. Meta lawyers do not take kindly to modified WhatsApp apks.

> There are other ways to archive whatsapp messages that don't involve modified WhatsApp apks.

What other ways are there that don't involve WhatsApp's Google Drive backup feature or scraping the web interface?


The clean way to do it (which is how Telemessage’s competitors do it) is to use WhatsApp business APIs with dedicated phone numbers.

Most traders I dealt with wanted to do it on their personal cell phones so they could keep contacts as they move around. Most of them are like salespeople: they know exactly how much money they bring in, and successful ones refuse to do anything that will impede THEIR method of working. SEC fines, regulations? Those are for less successful people.

EDIT: There was another post calling them divas; a lot of them act that way.


Yes, that's why TeleMessage managed to sell such a hacky solution: there was a clear desire for that product.

I mean if they're going to use their personal devices for this then a cracked Whatsapp wouldn't help the business anyway.

For devices the company controls they can of course use the API the above poster mentioned though


Yes it does. They convince the traders to let them install MDM and take over the phone; WhatsApp/Telegram/Signal still work, and the business gets the records. If they leave the trading firm, the phone gets wiped, but traders can just reinstall the Store versions, log back in, and everything is back.

I had scraping the web interface in mind.

OK, this absolutely reminds me of using Indian WhatsApp mods years ago: stickers, more features, local and portable backups... I wouldn't try that as a member of the government, though.

Is it a coincidence that it reads almost exactly like SMERSH?

https://en.wikipedia.org/wiki/SMERSH


Probably coincidence. The founder of the company was named Stephen Marsh.

There's a point at which coincidence and opportunity meet.

Probably not. It's trendy to give edgy names to companies. See: Palintir.

You mean Palantir

And the name is not very edgy so much as a pretty exact mission description: it describes exactly what it grows from. Seeing stones, aka everyone's cellphone data, collected, analyzed, and turned into predictions for kings.

Not even pretending to not be evil is what makes it edgy.

To be honest, the good guys turned out to be pretty embarrassing too.

The whole "everyone thinks like us" delusion bought with the surplus of a good times window distributed all around and its still willing to return to this delusional state of affairs.

The obvious plot holes they reveal when it comes to "we do not discuss nature" (the bugs in the human mind are all fixable with education) and "we do not discuss nurture" (all cultures are equal, and equally capable; disregard the evidence before your eyes).

You don't get to juggle and drop so many balls and not massively lose confidence!

The rule of (fingers in ears) "la-la-la" is over. The problem is: the right is a reactionary mess that has no solutions, analysis, or tools to exploit these weaknesses.


> You don't get to juggle and drop so many balls and not massively lose confidence!

You don't get to run all this circus if you don't intend to run it only as a circus. It's time to stop kidding ourselves that anybody mainstream is sincere and smart enough to move anywhere different from where they all are told to go.


Huh? If the goal is compliance, you wouldn't use something that's worse for compliance - which is why the Legal and Security wouldn't like it. If it helped with compliance, they'd love it! So the reason can't be compliance.

The goal is the appearance of compliance, not actual compliance. Check the boxes.

Sounds like you've never done compliance.

You can never control what I do on my device with the messages I receive: I can take screenshots, or, if the app prevents that, take a picture of the screen.

The goal of Signal is trusted end-to-end encrypted communication. Device/message security on either end is not in scope for Signal's threat model.


TM SGNL changes the security model from "I trust the people in the chat" to "I trust the people in the chat and also the company archiving the chat".

If you don't trust the people in your chat, they shouldn't be in your chat.


> If you don't trust the people in your chat, they shouldn't be in your chat.

I assure you, none of these people trust each other. Backstabbing is normal.

They're also likely using it to talk to foreign counterparts. Again, most of whom they don't trust a bit.

Encryption isn't just about "do I trust the recipient".


You are conflating levels of trust.

The trust level required with Signal is, "do I trust the people in this chat not to share the specific communications I am sending to them with some other party whom I do not want to have a copy".

There are many many situations where this level of trust applies that "trust" in the general sense does not apply. It is a useful property.

And if you don't have that level of trust, don't put it in writing.

TM SGNL changes the trust required to, "do I also trust this 3rd party not to share the contents of any of my communications, possibly inadvertently due to poor security practices".

This is a categorical and demonstrably material difference in security model. I do not understand why so many are claiming it is not.


>TM SGNL changes the trust required to, "do I also trust this 3rd party not to share the contents of any of my communications, possibly inadvertently due to poor security practices".

That's the same level of trust really. Signal provides a guarantee that message bearer (i.e. Signal) can't see the contents, but end users may do whatever.

You can't really assume that the counterparty's device isn't rooted by their company, or that they aren't themselves required by law to provide written transcripts to an archive at the end of each day. In fact, for a counterparty who happens to be a US government official, keeping such records is publicly known and mandated by law.

The people who assume that they are talking with one of the government officials and expect records not to be kept are probably doing something (borderline) illegal, like discussing treason and bribes.

No, this is not a "nothing to hide argument", because those people aren't sending dickpics in their private capacity.


If your counterparty is compromised, that still only leaks your communication with that counterparty, but not other, unrelated conversations.

> This is a categorical and demonstrably material difference in security model. I do not understand why so many are claiming it is not.

Because all it takes is one user to decide they trust the third party.

Right now you actually have to do more than trust everyone, you have to trust everyone they trust with their chat history. Which already can include this sort of third party.


One of the most popular “e2ee” communication systems, iMessage, does exactly this each night when the iMessage user’s phone backs up its endpoint keys or its iMessage history to Apple in a non-e2ee fashion.

This allows Apple (and the US intelligence community, including FBI/DHS) to surveil approximately 100% of all non-China iMessages in close to realtime (in the usual case where it’s set to backup cross-device iMessage sync keys).

(China, cleverly, requires Apple to not only store all the Chinese iCloud data in China, but also requires that it happen on machines owned and operated by a joint venture with a Chinese-government-controlled entity, keeping them from having to negotiate continued access to the data the way the FBI did.)

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...

Yet Apple can still legitimately claim that iMessage is e2ee, even though the plaintext is being backed up in a way that is readable to them. It’s a backdoor by another name.

Everyone wins: Apple gets to say E2EE, the state gets to surveil the texts of everyone in the whole country without a warrant thanks to FISA.


I suppose if both you and the recipient have cloud backups disabled, then Apple can no longer view your messages.

But outside of that scenario, is there any advantage to iMessage using e2ee instead of just regular TLS?

Edit: Apparently it's up to you whether you want your iCloud backups to use E2EE. There's an account setting: https://support.apple.com/en-us/102651. Standard protection is a sensible default for regular users who aren't tech-savvy, as with E2EE they're at risk of losing all their iCloud data if they lose their key.


That's an old article. According to Apple docs, Advanced Data Protection covers Device and Messages backups, which means they are E2EE.

Correct, but nobody turns it on because it’s opt in, and even if you turn it on, 100% of your iMessages will still be escrowed in a form readable to Apple due to the fact that the other ends of your iMessage conversations won’t have ADP enabled because it’s off by default.

Again, Apple gets to say “we have e2ee, any user who wants it can turn it on” and the FBI gets to read 100% of the texts in the country unimpeded.

If Apple really wanted to promote privacy, they’d have deployed the so-called “trust circle” system they designed and implemented which allowed a quorum of trusted contacts to use their own keys to allow you to recover your account e2ee keys without Apple being able to access it, rolled that out, and then slowly migrated their entire user base over to e2ee backups.

They have not, and they will not, because that will compromise the surveillance backdoor, and get them regulated upon, or worse. The current administration has already shown that they are willing to impose insanely steep tariffs on the iPhone.

You can’t fight city hall, you don’t need a weatherman to know which way the wind blows, etc. The US intelligence community has a heart attack gun. Tim Apple does not.

Separately it is an interesting aside that Apple’s 1A rights are being violated here by the presumptive retaliation should they publish such a migration feature (software code being protected speech).


And yet, it's somehow so effective that it's illegal in the UK because it doesn't let the government read everyone's messages.

TBF, governments trying to outlaw some kind of privacy doesn't necessarily mean it's a current impediment to them. They can be planning ahead, securing their position, or just trying to move the window of what is considered acceptable.

Are there any stats as to the percentage of iPhone users that enable Advanced Data Protection? Defaults matter a lot, and I wouldn't be surprised if that number is (well) below 10%.

If you are the only person out of all the people you correspond with who has ADP enabled, then everyone you correspond with is uploading the plaintext of your messages to Apple.


The number is well well below 1%. I would bet six figure sums it is below 0.1%.

Effectively nobody has it on. 99%+ of users aren’t even aware of the feature’s existence.

https://daringfireball.net/linked/2023/12/05/icloud-advanced...

You have to remember that there are something like a billion+ iOS users out there. 100 million people have not written down their 27-character alphanumeric account recovery key.


The same applies to WhatsApp. Message backups are unencrypted by default, and even the whole iPhone backup includes WhatsApp's unencrypted chat history by default. One reason why it was a big deal for the UK to disable iCloud's E2EE backup.

There are compliance reasons to want the communications encrypted in flight but retained at rest. Federal record-keeping laws would otherwise prohibit the use of a service like Signal. I'm honestly impressed that the people involved actually took the extra effort for compliance when nothing else they did was above board...

> There are compliance reasons

Makes sense. But it's still debatable whether the compliance requirements are acting against the security model, or whether there are bigger concerns here than just secure communication.


I would not assume the archives were meant for compliance and federal records.

We also have no evidence it was in use back in March. It may be a response to that oops.

Any client-side limitations are not part of the security model because you don't control other people's devices. Even with an unmodified app, they're trivially bypassed using a rooted/jailbroken device.

Not part of Signal's security model, but trusting people in that chat very much can and should be part of the user's security model. If you don't trust them, why are they in the chat in the first place?

It's not a person in the chat, it's an account. The account is usually controlled by the person associated with it, but you can't assume that it's always controlled by that person.

Is it though? I think TM Signal is just emailing the chats to a server from the phone it's installed on.

> If you don't trust them, why are they in the chat in the first place?

Journalist? Taliban negotiator? Ex-wife?


You are conflating "trust in all ways" with "trust to receive the communications in the specific chat they are party to". The former is not relevant.

Well, the ex-wife in question can be trusted to receive it a-okay, and to screenshot it and send it to her lawyer and the cops too, depending on the contents. So can US government officials. Now we just know how exactly they do it.

Or with the more affordable (in terms of skills) method of using another phone to take pictures of key messages on the screen of the first one.

OK, say you're a bank. The SEC does not care what you do and is actively working to make sure nobody else does either. You never get fines, all the traders are WhatsApping about deals, and it's awesome. But what if the SEC decides to care in the future? Just mark all your messages as self-deleting. But what if you want to be able to read them in the future?

And then the Smarsh (owners of Telemessage) salesman calls you, and says "your users can keep using the apps they love - WhatsApp, Signal - but we archive the self-deleting messages somewhere you can hide from the SEC if they happen to change their mind". And everyone loves it (you already fired all the Security or Legal teams).


The purpose of using something like Signal is not compatible with the needs of the government or the law.

I’ve worked for non-Federal government. Your work product is not your own, and the public interest, as expressed by the law requires that your communications and decisions can be reviewed by the government you serve.

The US government created the dark web to enable espionage; it's pretty obvious why they need to read their employees' mail.


My guesses:

You want to talk to people who want to use Signal, but you yourself don't care about E2E.

You trust TeleMessage, but not Telegram or Meta. And you want convenient archiving.


Maybe someone wanted to satisfy the procedures of law but also had to please the bros. The result is a hack on a secure program that adds conversation archiving.

My wild speculation is that someone wants to use AI to monitor everyone’s communication.

What they should have done is write a bot that you invite to every conversation for "archival purposes". No new app.

Then you have a new attack surface. It's still missing the point of signal.

If your institution has to log the messages, they are a third party to the conversation; I would rather they were "in the chat" than some lowest-bidder third party.

A chat-participant bot would also be handy if you wanted to feed everything through your AI bot at the same time.


Looks like an AI-generated, unverified submission. I don't see any technical details on the nature of the vulnerability or the code affected.

> whose only goal is profit maximization

It's really a trade-off. If you raise too much money, you have to show growth, at least on paper. All the levers are then tuned for growth.

On the other hand, you risk losing out in the market if you raise just enough to build a viable product and get initial customers, with the goal of growing organically.

The market rewards the winners. Look at Wiz: they captured the cloud security market by raising huge capital and moving fast.

The open-source route is probably the way to go if you want to build a product based on your foundational ideas. It helps drive adoption organically and, hopefully, discover a monetisation opportunity.

