Hacker News | mrhigat4's comments

Android should really use a modern kernel. The forking mess involved in Android updates is a terrible problem, driven by the lack of generic drivers on mobile devices.

Copperhead[0] has been working to apply security patches to the kernel for some time and PostMarketOS[1] has an eventual goal of using the mainline upstream kernel. Really pulling for PMOS.

[0]: https://copperhead.co/android/

[1]: https://www.postmarketos.org/


Yes, this was my first thought as I was reading the article. A better title for it would be "Backporting Modern Linux Kernel Features to Our Really Old Kernels Instead of Doing the Right Thing and Keeping Up To Date".

I do -- I really do -- appreciate that version churn is difficult, and the Linux kernel also doesn't make it easy since they don't guarantee any stable internal APIs, but they're also adding a lot of work on themselves by having to maintain their own kernel trees that diverge significantly from mainline. They're also at the mercy (to some extent) of many chipset manufacturers and whatever they've chosen to base their efforts on.

A quick look at some kernel release timeframes from the versions they mention in the article:

    4.3  - 11/2015
    4.4  - 02/2016
    4.6  - 05/2016
    4.8  - 10/2016
    4.10 - 02/2017
The only kernel on that list I'd accept without question as unrealistic to upgrade to for Oreo is 4.10. 4.8 might be a stretch, since I'm guessing they'd already branched internally for Oreo by then, though they likely had a month or so of RCs they could have used as a base before that. There's certainly risk in basing your work on a newly-released (or soon-to-be-released) kernel, but given the generally high quality of kernel releases, I imagine that'd be pretty far down their list of risks. Regardless, 4.6 or 4.7 would be entirely reasonable to use as a base, and since they own the conformance test criteria, they could also require that all their vendors use that as a minimum version.

And yet they are backporting some features as far back as 3.18, which was originally released in December 2014 and, while designated an LTS kernel, has at this point moved into end-of-life status. And we wonder why Android security is a nightmare.
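Putting concrete numbers on that: a quick sketch of how old each of those kernel series was when Oreo shipped. Dates are taken at month granularity from the lists above; Oreo's August 2017 ship date is an assumption for the evaluation point.

```python
from datetime import date

# Kernel series release dates (month granularity), from the thread above.
RELEASES = {
    "3.18": date(2014, 12, 1),
    "4.3":  date(2015, 11, 1),
    "4.4":  date(2016, 2, 1),
    "4.6":  date(2016, 5, 1),
    "4.8":  date(2016, 10, 1),
    "4.10": date(2017, 2, 1),
}

def age_in_months(version, as_of):
    """Whole months between a kernel series release and a given date."""
    released = RELEASES[version]
    return (as_of.year - released.year) * 12 + (as_of.month - released.month)

# Assumed evaluation date: Oreo shipping in August 2017.
OREO = date(2017, 8, 1)
for v in sorted(RELEASES, key=RELEASES.get):
    print(f"{v:>5}: {age_in_months(v, OREO):2d} months old when Oreo shipped")
```

By this count 3.18 was 32 months old at Oreo's release, while 4.10 was only 6.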


Main reason is that there is a lot of hardware that never got binary blobs / drivers updated for newer kernels.

We're talking input controllers, wifi, bluetooth, nfc chips, gyroscopes, amps and half a dozen other parts that never attempted to have drivers mainlined in Linux kernel.

It's a chicken-and-egg problem: they won't make updated blobs until Android includes a newer kernel, and Android won't use a newer kernel because that would block two thirds of phones from being upgraded. Because of Broadcom bluetooth chips alone, Samsung might decide they are better off forking Android than waiting for Broadcom.

Edit: Sony has a bunch of phones that can run on 4.4 kernel: https://developer.sonymobile.com/open-devices/


Yes, and I addressed that in my comment. Google, however, can effectively do whatever they want. If they require kernel X, and a manufacturer doesn't support it, they'll either get their shit together, or they'll get left behind. I bet most of them do enough business supplying parts for Android phones that they'd get their shit together. And it's not hard! Writing an initial driver for some hardware might take a lot of effort, but keeping it up-to-date as new kernel releases happen will not, at least in the vast majority of cases.


I expect that if Google had that strong a position, they would use it. Perhaps that would lead to more fragmentation and fewer vendors releasing the latest Android, with a highly dominant maker squashing the others and then having a stronger negotiating position with Google.


Perhaps it would be more desirable to reduce dependency on blobs? What can we do to encourage manufacturers to release source for their hardware? I assume they care about selling hardware and the firmware is just incidental?


Even when full source is available, it doesn't really solve the problem. Many of the drivers provided for Android SoCs are very poor quality and would not be allowed into the kernel. Typical problems include not using Linux conventions for config parameters (device tree) and duplicating large portions of existing kernel functionality.

Not getting into the tree is a problem because kernel interfaces change all the time. When someone changes a kernel interface they are expected to update all of the affected code, but out-of-tree drivers don't get that.


> If they require kernel X, and a manufacturer doesn't support it, they'll either get their shit together, or they'll get left behind.

This leaves Google with a version of Android that does not run on anything. There are fewer than a handful of relevant SoM manufacturers capable of delivering consumer-grade SoMs that can run hardware-accelerated Android; Google cannot alienate them.


Apple seems to have no problem keeping drivers/blobs for their hardware working when they release new versions of iOS. Sure, they do have the advantage of tight control over their hardware and core software, and a vastly smaller number of pieces of hardware to target, but in that way they're not that much different than any random Android vendor, hardware-wise. Sony (for example) is perfectly capable of only choosing vendors that can keep up with kernel versions, or at least vendors that will be open enough with them (not even with the public, just Sony) so that Sony can hire a software team to keep things up to date.

But they don't care enough about this sort of thing (unlike Apple), and no one (such as Google) is forcing them, so it won't get done unless they see an economic upside.


Sure, Google can do it; but then what, sell handsets with their own OS, based on a fork of GNU/Linux?

It has worked quite well for those that tried.


What would they do? Sell handsets with a years-old Android, of course. Experience shows that the average customer doesn't give a shit about Android versions.


Which in such scenario wouldn't be able to talk to Google Play Service servers any longer, if Google was actually serious about doing it.


It also leaves those manufacturers without anything to put on their hardware.

I would think they would start to take things more seriously at that point.


I'm unsure about who has the upper hand, but I feel like SoM manufacturers know what they are doing and are where they are based on merit, whereas Android is there because it was available when it mattered and gained momentum, not because of any technical merit. Android as a developer ecosystem is a train wreck. I have better tooling for deeply embedded bare-metal platforms than I have for Android userspace applications.


It's not like more than 1% of the Android phones that have shipped to date will actually get an update to Oreo anyway, so why not leave them behind and update the kernel?

New phones need new drivers which must support the new kernel, period. I don't see what the big deal is. You aren't getting Oreo on your old ass Samsung Galaxy S2 anyway.


Nougat runs fine on a Galaxy S2 with Lineage OS. It would not be unexpected for them to port Oreo eventually too.


GP's point was about OEM support for newer versions of Android, though; I can't imagine that third-party ROMs on old devices are even slightly a factor in Google's hesitation to upgrade to newer kernel versions.


Most manufacturers don't update their phones to newer versions of Android anyway. I think Google should bite the bullet and upgrade to a newer kernel, or at least announce now that next year's Android will be on the latest kernel, so hardware manufacturers start upstreaming their patches.


The Galaxy S8 runs 4.4, and has since release. The Oreo AOSP uses the 4.10 headers. But ultimately any vendor can integrate with a wide range of kernels, depending upon their needs.


If they are backporting features well, they're approaching kernel development very similarly to the datacenter.

As an example, the standard server at $large-dayjob-server-farm runs on RHEL 6, with RHEL 7 (the latest release) being used for new machines. Both are still supported for quite a few years into the future. Let's look at their kernel versions: (per https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux)

"6.9, also termed Update 9, March 21, 2017; 5 months ago (kernel 2.6.32-696)"

"7.4, also termed Update 4, August 1, 2017; 28 days ago (kernel 3.10.0-693)"

Upstream release timelines:

    2.6.32 - 12/2009
    3.10.0 - 06/2013

Both of these upstream releases are YEARS older than both "current" and what many Android devices are using today - we'll have machines running some patched version of 2.6.32 a decade after its initial release!

So there's obviously a difference in the quality of backports and updates here, and some grey area in between. I can't with a straight face say that either Red Hat or Google is right or wrong in their approach; it's just very different from the frequent-release model that some people are used to.


That's a very good point.

My understanding, though, is that RH etc. do that mainly for stability reasons. Their customers don't want latest-and-greatest, they want small evolutions of the stuff they know works, with only bug-fixes and must-have new features.

Of course, that comes with downsides too: build your infra on RHEL 6.x, and then once 6.x becomes end-of-life, upgrading to the new latest release is a huge undertaking.

I think RHEL is doing the right thing because it's what their customers want; I would argue that in many cases their customers are doing the wrong thing and should make more-frequent, smaller upgrades rather than one giant upgrade twice a decade. But that's certainly open for debate and reflects only my experience managing infra ;)

With something like Android, I'd expect the pendulum to swing a bit the other way: you don't want bleeding-edge or unstable, but you probably do want something quite a bit more up-to-date than something that RH pushes out. To use your example, I'd say releasing a phone in 2017 based on 2.6.32 (released nearly 8 years ago!), even with 696 patch releases, would be unacceptable.

But yeah, where do you draw the line at acceptability? I personally think you shouldn't be going with a kernel release whose series was released more than a year or so before you start development (assuming getting to a release is going to take 6-12 months from that point), and ideally you just take whatever is the latest stable when you start development, if that's possible. Sure, opinions differ, but that's kinda the point... this is my opinion :)


The comparison isn't fair. You, as a user of RHEL, expend effort to upgrade, and the benefits are often not clear, so you don't.

RH as a vendor has to backport because their users demand a long-supported yet modern-ish kernel.

Google spends time and effort to backport, and this story is discussing the benefits versus putting that effort into upgrading.

Your analogy isn't fair, though obviously Android's customers here are the device makers, and they are tilting the balance towards backporting.


Agreed, but let's point fingers at the whole stack if we want this to happen.

Treble [0] will finally add a HAL to the base system, so updates to the kernel and drivers should in theory be easier on devices that ship with 8.0 (of which there are exactly zero so far). So at least Google's on the right track.

But Google doesn't create the drivers and Google doesn't ship the board support packages which vendors build upon. And so we come to Qualcomm, Samsung, Mediatek, and whoever else is shipping proprietary drivers for their SoCs and radios, and who don't provide binaries for new ABIs.

The workaround is libhybris, which Ubuntu Touch, Sailfish, and now PMOS are using to support drivers built against Android kernels and its userspace, but that is so fraught with issues that it can't possibly be supported by Google or the vendors.

So we have Android 8.0 shipping with a 4.4 minimum to support the lowest common denominator of BSPs, and it kind of has to be that way until some market force changes the landscape. I don't have high hopes for either of the projects you mentioned, sadly.

[0] https://source.android.com/devices/architecture/treble


Agreed, it really is a shared problem. It seems like a kind of "collective action problem" [1], where coordination between different parties is required - but they all have different interests and they're not all interested in splitting the costs.

Google is in a difficult position. In order for Android to catch up to the iPhone they needed to encourage a wide consortium of vendors to adopt and sell Android hardware. Part of the way they encourage vendors to sell Android phones is by being very permissive about how vendors install, update and use Android. This helps Android by growing market share.

Unfortunately, it leads to a lot of variation and fragmentation in devices, which in the long term can hurt the Android ecosystem. I guess there's a balance that Google is trying to strike between being too permissive and too restrictive in how they work with vendors.

[1] https://en.wikipedia.org/wiki/Collective_action#Collective_a...


It worked for Microsoft to impose hardware designs on PCs; the big difference was that OEMs did not have the source code of MS-DOS, and later Windows, available to them to do whatever they felt like with.


OEMs can't do whatever they like with Android outside of China PR. They need Google play services.


Yet Google decides, sadly for us, not to use that as enforcement for the updates.


Indeed. Google has the power, and that excuse that Android is open source and therefore Google can't impose anything on OEMs is getting a little old and tired.

If Google would have promoted Android the way it did with Chrome OS (and the open source Chromium OS), it wouldn't be in this situation, and we'd be getting updates often. But I guess hindsight is 20/20. I still wish they did more about the support of Android devices throughout the ecosystem, not just for the highest-end devices.


> we come to Qualcomm, Samsung, Mediatek, and whoever else is shipping proprietary drivers for their SoCs and radios, and who don't provide binaries for new ABIs.

Vote with your wallet.

The Librem 5 mentioned this week uses a 'liberated' i.MX6 chip because they want a phone with an upstream kernel. Technology-wise it's a little long in the tooth, but they're emphasising that it's possible not to reward vendors for bad behaviour.

With RPi providing resources to de-blob and upstream the Pi, perhaps some entrepreneur will get behind a crowdfunded Broadcom based phone, with work underway on VC5.

http://phoronix.com/scan.php?page=news_item&px=BCM7268-DRM-W...


Yes, they should use modern kernels but that has not happened yet. I am appalled by the practice of keeping customers on old software and sometimes even without proper security patches.

I hope regulators (EU in particular) will act on this and require manufacturers to provide support for products for a reasonable amount of time.

Devices should be clearly labelled with a "best before" date, until which the vendors should provide (at least) security updates. Right now the situation is unbearable: you buy a phone for $200 to $700 and you don't know how long it is good for. A technically oriented or security-minded person can deal with the situation, but almost everyone has a smartphone. Billions of insecure devices out there isn't good for anyone.

It shouldn't be unreasonable to assume that a smartphone should be good for 5+ years.


Place the blame squarely at the feet of vendors like Qualcomm - they won't update the BSPs.


The big issue with this is the – in hindsight, you always know better – completely stupid decision of Torvalds never to support a stable API for kernel modules or drivers.

A similar issue happened recently between AMD and the kernel maintainers (AMD wanted a stable API for GPU drivers that would allow the same drivers to run on Windows, Mac, desktop Linux, and Android; and AMD had already built this API and was willing to maintain it).

In the end, this can only be solved if the Linux kernel gets a stable driver API, and with Oreo, Google added exactly that for their own fork of the kernel.


> The big issue with this is the – afterwards you always know better – completely stupid decision of Torvalds to never support a stable API for kernel modules, or drivers.

It's not stupid. APIs need to change eventually. Even famously backwards-compatible Windows has its fair share of API changes. And when public APIs change, you can either drop support for the old one (thereby also dropping support for old hardware), or provide a compatibility mapping from the old to the new API (if that is even possible).

Linus knows that he does not have the manpower to do either. All he can realistically do is only support the current API, and require devs who change the API to update all usages inside the kernel source tree when doing so.
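The "compatibility mapping" option mentioned above can be sketched as a shim layer: the old entry point survives as a thin wrapper over the new interface, so old callers keep working while only the new API is actively maintained. This is purely illustrative (the function names and signatures are invented, and real kernel shims would be C, not Python):

```python
# New interface: callers pass an options dict (hypothetical example).
def read_sensor(device, options):
    return {"device": device, "mode": options.get("mode", "normal")}

# Compatibility shim: the pre-change signature, reimplemented on top of
# the new one so code written against the old API keeps working.
def read_sensor_legacy(device, fast=False):
    return read_sensor(device, {"mode": "fast" if fast else "normal"})

print(read_sensor_legacy("gyro", fast=True))
# → {'device': 'gyro', 'mode': 'fast'}
```

The catch, as the comment notes, is that every such shim is code someone must maintain forever, which is exactly the manpower the kernel doesn't have.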


There are several problems with this.

(a) all drivers live in kernel space, even the sketchy drivers you have to download from NVIDIA

(b) if a driver wants to be included easily, it needs to become part of the kernel source tree, but that can only happen if Torvalds and his maintainers get full control.

(c) the API is broken with every release

As a result of all this, the kernel maintainers refused to cooperate with AMD in any way on their open driver (AMD needs a stable layer at some point to run the same additional functionality on Windows, Mac, and Linux – the alternative is no Linux support), we get ancient drivers on Android with Google building their own HAL, and so on.

At the same time, the syscall interface is the one thing Torvalds does keep stable in the kernel – even though stability there is the least necessary: it could simply be provided by a small userspace library that you call instead of making raw syscalls, and doing the translation between old and new syscalls in userspace would also massively improve security.

The decisions made by Torvalds are reckless, massively hurt security and usability of open source, and the entire concept of open source ("we’d rather have only a proprietary AMD driver than an open AMD driver that relies on an HAL").


I have doubts, but I've been waiting for a phone like this for a long time. I hope it works out and gets enough funding. I'm glad they didn't make compromises on the OS, hardware switches, etc. I wish more companies would cater to passionate yet non-mainstream markets. It's crazy a truly hackable linux phone doesn't exist today.


I support the whole idea, but I think it's premature.

The Riot app, in my experience, has a pretty unwelcoming UI/UX and is still insanely buggy. Things like Jitsi integration, widgets, and a phone partnership should come after a solid, stable 1.0 MVP, IMHO. Encryption is still opt-in and beta.

So super supportive of the environment, the momentum and a native matrix phone partnership is the right move eventually, but please get it stable, fast and polished first before branching out too far.


> Encryption is still opt-in and beta

This is the part that concerns me.

https://whispersystems.org/blog/the-ecosystem-is-moving/

Moxie lays out a challenge to federation enthusiasts to prove him wrong that federated chat can be secure and have good usability. I would respectfully note that Matrix seems to be responding to the challenge with a chat client that is neither. Instead they seem to be doubling down on federation and integrations.

Usability can be fixed, but federation and multiple clients make it a challenge. Security, on the other hand, is again concerning because it feels tacked on. It's not on by default because, again, it makes things complicated when you have multiple clients. Last time I tried, you could put in your password on the web client (browser encryption!) and join an existing conversation, then see all future posts by the people in the conversation, because suddenly they've accepted your new public key. I had to dig a little in the configs to find the current public keys for the clients in the conversation. Either they have to make a UX-friendly way of warning everybody that there's a new client, or accept that stealing an account password will let you snoop on conversations.

I really appreciate their enthusiasm, and I hope someone gets federation right, but it just seemed like a mess.

By contrast, I think that Signal recognized that you can work around the security vs usability tradeoff by trading off on a third vector: feature set. I think that we won't get a federated system until someone heeds Moxie's warnings and does some careful, creative thinking.


> It's not on by default because, again, it makes it complicated when you have multiple clients

This is simply not true. The reason it's not on by default is because we're still developing it and it's in beta. It's not tacked on; we've designed in E2E from the outset - but implementing it well in a decentralised manner is a huge amount of work; probably 5-6x more than in a centralised system like Signal. We're not going to enable it by default until we are 99.999% sure that it won't cause regressions over the non-e2e client.

> Either they have to make a UX-friendly way of warning everybody that there's a new client

I think you must have tried it a (very long) while ago - we've had the UnknownDeviceDialog since February. It looks like this (https://matrix.org/_matrix/media/v1/download/matrix.org/mOOj...), and warns you every time a new device is added to the room, and gives you the option of blacklisting it from receiving your messages if you don't trust it.

Now, totally agreed that this UX is ugly and needs work, but this is NOTHING to do with the decentralised or federated nature of the protocol. It's simply that we are currently very resource constrained for working on web front-end issues.

That said, if your ONLY priority is security, then Moxie's "the ecosystem is moving" probably has a point. After all, in an open ecosystem like Matrix, it's possible someone will fire up a buggy/malicious client and inadvertently compromise a room. However, if you value freedom as well as privacy, Matrix or OMEMO are basically your only choices.
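At its core, the warn-on-new-device flow described above is a diff between the device list a client last saw for each user and the current one. A toy sketch of that logic (the user IDs and device IDs are made-up data, not real Matrix API output, though the shape loosely mirrors what a client keeps after a /keys/query call):

```python
def new_devices(known, current):
    """Return device IDs seen now that the user hasn't yet trusted.

    `known` and `current` map user_id -> set of device IDs, roughly the
    shape a client might cache after querying device keys (hypothetical)."""
    alerts = {}
    for user, devices in current.items():
        unseen = devices - known.get(user, set())
        if unseen:
            alerts[user] = sorted(unseen)   # these trigger the warning dialog
    return alerts

known = {"@alice:example.org": {"DEV1"}}
current = {"@alice:example.org": {"DEV1", "DEV2"},
           "@bob:example.org": {"DEVX"}}
print(new_devices(known, current))
# → {'@alice:example.org': ['DEV2'], '@bob:example.org': ['DEVX']}
```

The hard part isn't this diff; it's presenting it to a non-technical user in a way that makes the trust decision meaningful.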


> This is simply not true.

Alright, you make a convincing case that it's not tacked on and I'll give you the benefit of the doubt here. Unencrypted basically is just "dev mode".

Very glad to hear you patched up the hole with the unknown clients. Perhaps I'll try again down the line.

> Now, totally agreed that this UX is ugly and needs work, but this is NOTHING to do with the decentralised or federated nature of the protocol.

Well you have to design that dialogue that tells you new devices came on, right? And you have to deal with cases where some people accept a new client and some don't. And you have to explain to lay people what the heck an IRC bridge is. And so on. My point is just that you have a lot more things to explain to users, so you have a lot of UX work ahead that you wouldn't otherwise have. So I don't think it's unrelated. (Aside from the issue of explaining things, it's not particularly ugly actually, from what I remember).

> it's possible someone will fire up a buggy/malicious client and inadvertently compromise a room

I appreciate that you acknowledge that. It signals that you have things in perspective.

I actually think some people do consider freedom to be a security issue. They don't want Signal servers to go down, or get into malicious hands.

Also: Nothing strictly stops people from using rogue Signal clients either. There's just social pressure deterring it. By that same token, perhaps you could use social pressure to deter developers from lying about what client they are, and then have a vetting process for secure clients. And then warn users when an insecure one comes on. (And perhaps count web as "not recommended for especially sensitive discussions")

Anyway, thank you for addressing my points and accepting critique. I do hope to see you succeed.


> After all, in an open ecosystem like Matrix, it's possible someone will fire up a buggy/malicious client and inadvertently compromise a room.

Isn't this a case for building in some kind of system whereby clients can be signed and have their signatures revoked by their creators or, for lack of a better word, ostracized by the wider community? Sort of like a web of trust model, but for clients, not just users, to make it more clear when somebody is joining with an untrusted client and perhaps allow moderator control over whether to allow untrusted clients to join.
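A toy model of that moderator-controlled client-trust policy might look like the following. Everything here is invented for illustration (vendor keys, build names, the revocation list), and a real scheme would use public-key signatures rather than shared-secret MACs; as the reply below notes, actually proving which client binary is running is a much harder problem:

```python
import hmac
import hashlib

VENDOR_KEYS = {"riot-web": b"vendor-secret-1"}   # hypothetical vendor signing keys
REVOKED = {"riot-web/0.9.0"}                     # builds the community has ostracized

def client_trusted(name, version, tag):
    """Accept a client build only if its vendor MAC checks out and the
    build hasn't been revoked. Purely illustrative: a shared-secret MAC
    stands in for what would really be a public-key signature."""
    key = VENDOR_KEYS.get(name)
    if key is None:
        return False                             # unknown vendor: untrusted
    build = f"{name}/{version}"
    if build in REVOKED:
        return False                             # explicitly ostracized build
    expected = hmac.new(key, build.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

A room moderator could then choose whether untrusted clients may join at all, or merely get flagged.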


Possibly, but this is heading seriously into DRM territory. One would need to be running the app in some kind of secure enclave that could attest to the authenticity of the app (e.g. via SGX on Intel). There's something a bit unsavoury about saying that "only truly official signed apps are allowed to participate in this open network", and it gives a huge amount of power to those responsible for the secure enclave/trusted computing stuff. (There's also the approach that djb mentions in https://twitter.com/hashbreaker/status/732912508089032706)

It's possible that just relying on social mechanisms may be enough to discourage people from running known evil apps (similar to educating users not to install malware today, or do trusted operations with cybercafe computers, or whatever). Effectively, the verification process when going and explicitly trusting a new device needs to explicitly prompt the user to consider where on earth this app came from, and if it should be trusted.

The only alternative is really DRM, which just feels wrong.


>There's something a bit unsavoury about saying that "only truly official signed apps are allowed to participate in this open network", and it gives a huge amount of power to those responsible for the secure enclave/trusted computing stuff.

Maybe it's a bit naive, but isn't that what federation is supposed to solve? People who are more security-paranoid can forbid clients which don't have the highest security certification, and operators who aren't so diligent will be fine with signed clients being run on untrusted hardware.

I mean... is there any open-source software being developed today that enforces key security in secure hardware enclaves? Verifying the GPG signatures on binary packages is "good enough" for most operators. Build reproducibility will help to further reduce the need to trust unverified hardware.

It seems to me the job of the protocol, and baseline/recommended UI/UX, is merely to help users make informed decisions. Security is a spectrum, and if signed clients improve security (while not fraudulently representing itself as perfect or near-perfect security, if it were running on trusted hardware), then that's a net benefit to the open network.


I may be missing something, but how do you prove that an app is running a trusted codebase? I know of no PGP clients for instance that sign messages to try to prove that they were sent from a trusted app (as opposed to a trusted user). The only way I know of to do this would be to hook into a trusted execution environment of some flavour like Intel's SGX or Apple's Secure Enclave, to let effectively the chip vendor sign off that you are running an official app installed by official means. You /could/ do this, but you are putting a lot of trust in the secure enclave implementation and those controlling it, and essentially putting all your eggs in one basket. It might also lure users into a false sense of security: just because a user is using an appstore signed copy of an app doesn't mean that app is actually trustworthy or bug free. And it would also artificially discriminate against legitimate apps which aren't part of a trusted execution environment, which seems dangerous - and effectively promoting DRM at the expense of FOSS.

This certainly needs more thought :)


We're working as hard as we can on improving Riot's UI/UX and getting crypto out of beta. As others have said, partnerships like this help fund the team to get the core stuff in place. After all, this project (assuming the campaign is successful) is 2 months away anyway.


What's the value proposition of building a whole new client? If your goal is to give people a client for your cool new protocol, why would you waste any development resources on squashing fiddly UI bugs of your own doing, rather than, say, providing a reference implementation by forking the Signal app and swapping out the pieces until it's talking Matrix instead of TextSecure?

I see this mistake constantly. The ring.cx folks did the same thing. Inevitably, everyone ends up commenting how poor the app is—some of them even saying how much they'd like to be able to use it and would if it weren't for the UI. It wouldn't be so silly if there weren't plenty of essentially ready-made solutions for the problem.


1. Signal doesn't have a native Linux app (other than Electron)

2. The hope is to piggyback on one of the existing native Matrix SDKs or clients rather than write a whole new one.

3. One of the biggest complaints we hear about Matrix is that there is no native desktop app with parity to Riot yet. So this is our chance to fix that.


> Signal doesn't have a native Linux app (other than Electron)

How is that an explanation for directing (apparently limited) resources into riot-android?

> there is no native desktop app with parity to Riot yet

Huh? The comparisons I made in my comment were to Signal and Ring, which are predominantly mobile messaging apps. The person you originally responded to was discussing Matrix on mobile. Why do you keep mentioning desktop? If there were any confusion, this is a thread about an announcement for a new smartphone.


Riot/Android doesn't run on Linux, so directing limited resources into it is fairly irrelevant. The point of this campaign is to fund us to work on a native app which can benefit desktop and handset users alike - whilst also supporting the core team so we can also work on the React, Android and iOS SDKs which power Riot. I keep mentioning desktop because this is a significant benefit of the campaign.

The idea of somehow ripping out the core of the Signal or Ring apps and trying to bolt their UI onto Matrix SDKs is an interesting one, but in practice both protocols have significant differences to Matrix, and at best this would be a pretty big impedance mismatch. Of course, someone in the community is welcome to try to do it. Meanwhile, we'll keep plugging away at trying to make Riot kick ass (via the underlying Matrix SDKs), and add a native app to the pantheon too :)


> this is a thread about an announcement for a new smartphone

The smartphone in question runs Debian.


I think the whole point is elsewhere - get the UI/UX right. People forgive many things, but bad UX is not among them.

That said, I am excited about this - I love idea about pure OSS phone with hardware kill switches for mic and camera!


Matrix is really struggling on funding so they need these kinds of partnerships to allow them to continue making the software better.

For those who are reading this and would like to help out please see the following blog post: https://matrix.org/blog/2017/07/19/status-update/


>The Riot app in my experience has a pretty unwelcoming UI/UX experience and is still insanely buggy.

I've set up my own Synapse server on a VPS and have my extended family of 12 people, including an 85-year-old grandmother, using it on an iPad. It gets used daily for chat and picture sharing. I've come across a few minor problems on iOS, but overall I don't agree with that characterization.


I think the homeserver software is also still somewhat immature. I'm waiting for Dendrite.

I've been evaluating Synapse (all the other servers say they're pre-alpha incomplete) for the last week or so, and one immediate issue: on my low-power home NAS, joining #matrix:matrix.org took more than 15 minutes (and 0.7GiB of the 1GiB of memory I'd allocated to the container) with a single CPU core fully saturated. And it took no less than a minute to join a room with just 200 participants. Sure, that's just for the first time, but it still feels way too resource-heavy compared to other chat systems.

The machine has a 10-year-old Atom 330 CPU - which makes it an ancient relic, but hey, it has more than enough power to run XMPP (with various transports), mail, and web servers, and it just sits on a shelf in the kitchen with a barely audible hum.


Synapse's readme does try to spell out that it requires at least 2GB of RAM and a recent CPU. It is absolutely resource heavy, but not showstoppingly so in general. The comparison with XMPP is dubious as the protocols are completely different: it's like comparing a local filesystem with a distributed database and complaining that the DB is slower.

That said, Dendrite should improve things a lot; we should have more stats in the near future but it seems to idle around 150MB of RAM and should run much better on ancient hardware. We are not bothering optimising synapse much further in favour of focusing on finishing Dendrite. Needless to say, we expect Dendrite to be finished well in time for the Librem 5 to go live, 18 months from now!


You really should not upload other people's complete works while under copyright.

Edit: Nice License

> Copyright (c) 2017 Zack Thoutt


That's a shame. I really like the React interface, but would never build a company with it at the core, on principle. Revoking on first strike is super defensive and just plain slimy.

If there were something like create-preact-app (not forked from create-react-app) I'd be all about it. With its special decorators, dogmatic separation of templates, styles, and logic, and its mutability, I don't think I can get aboard the Vue.js train, though I'll admit I haven't dived deep yet.

The worst part is how Facebook seems to be asking other big players to follow their lead in adding kill switches to FOSS. It'd be one thing if it were just React. Devs shouldn't let this become common practice.


> - Get an app to block them

I'd sooner change my number than resort to most apps. Unless it's like uBlock Origin, where I just feed blacklists into it, I'm not really okay with giving an organization besides my service provider my call history. Read Nomorobo's TOS sometime; it's a doozy.


This is how the call blocking API on iOS works. The blocker app can only provide a static, pre-set list of numbers to block to the OS, and that's it. It has no access to call history, awareness of calls being received/made, etc. The OS handles all the blocking, referring to the blacklist the app provided earlier, and provides no feedback to the app itself about this.

Of course, this means that call blocking apps have fewer features than on Android. For example, apps can't dynamically look up a number when a call is received and make an on-the-fly decision. This is in keeping with iOS' philosophy of "privacy/security over features", vs. Android's "everything is completely open to developers, for better or worse".


It also means that the ultimate call blocking mechanism I've always wanted isn't available on iOS: Only allow calls from known numbers in my address book. Frustrating, because to achieve such a concept I've had to resort to DND mode in iOS which then also blocks all push notifications.


If you could highlight some of the more onerous claims in the TOS, that'd be appreciated. I've thought about using Nomorobo.


They lay it out in English before laying out the legalese here:

http://www.nomorobo.com/pages/privacy

Some highlights from the legalese:

> 1.2 Data Privacy. You understand and agree that some of your call information (including, but not limited to, a log of all phone calls made to your subscribed phone line(s) and any requested additions to any customizable phone number blacklist or white list) may be viewable by you, the Company, and by any other person having a phone line subscribed to the Nomorobo Service through the same user account as you.

I don't know that that is actually possible on iOS, though, as iOS doesn't give the call log to the blacklist provider.

>1.3 Contact. From time to time, the Company may need to send e-mails, in-app messages, and/or push notifications to you and automated voice calls and/or text messages to all phone lines that are subscribed to or otherwise using the Nomorobo Service.

Could be innocuous now and be much more annoying later if they decide to change their business model.

>Binding Arbitration. If the parties to the Agreements do not reach a solution through the informal resolution process described in Paragraph 6.3(ii), then any controversy or claim arising out of or relating to the Agreements, shall be settled by arbitration administered by the American Arbitration Association (the "AAA") in accordance with its Commercial Arbitration Rules, and judgment on the award rendered by the arbitrator(s) may be entered in any court having jurisdiction thereof or having jurisdiction over the relevant party or its assets.

Must go to arbitration if anything goes wrong and you want to sue.

Overall, not the worst I've seen. There was another one where you had to grant permission for it to post to your Facebook wall and/or Twitter account. I don't recall which one it was, though.


I don't want to stream, which puts me in the minority. I want services similar to GOG.com or Steam. I want to buy something digitally, have it stored remotely, but be able to pull it down locally and take it elsewhere if I want. That way old, unpopular, or niche media can exist on the platform forever, and I get my content on my devices when I want it. No fancy monthly subscription with a subset of content; I'll buy each piece of content and pay full price for it.

I realize this is not the 'vision' Hollywood has for their media and most people just want to stream content and not own it anyway, but all of this is very anti-consumer.


If you have Google Play Services installed, it will try to use it. If you don't, you can install the apk from here https://signal.org/android/apk/ which should auto-update. Their permissions list is still kind of whack, and everyone else on Android except you is going to have Google in between every message, so...


I use pass and love it. It provides a lot of flexibility. To fix the "website metadata is leaked in filenames" issue, I use another project by Jason, ctmg[0]. I changed the pass directory to be one directory deeper, encrypted it, and just do `ctmg open` when I boot to open my password list (similar to unlocking a KeePassX store), then use pass as normal. On shutdown, the opened folder is re-encrypted automatically. You could also set a ctmg close on a timer if you don't want the list to be available during your entire session after open.

Other things I do:

* Store all the files as .toml files so I can rip specific keys with a custom script.

* Have a directory for web so `pass web` will give me all websites, plus a script to fill in the username/password for each.

* Have a directory for contacts. Then wrote a script to generate vCard files by crawling and pulling keys, base64 profile images and all.

* Use syncthing to keep all devices up to date.

It's a pretty slick workflow, IMHO.

[0] https://git.zx2c4.com/ctmg/about/


Since pass supports extensions, you can make your setup less complex with pass-tomb, which keeps the whole tree of passwords encrypted inside a tomb. See https://github.com/roddhjav/pass-tomb


Nice to hear somebody out there is using ctmg. I never bothered making packages for distros other than Gentoo, but ctmg is quite useful so maybe I'll do that.


Cheers. Yeah, for sure. I was too lazy to make a PR on nixpkgs, but this[0] is what I wrote, if anyone stumbles on this using NixOS. The Nix package manager can be installed on top of most OSes, too.

[0]: https://pastebin.com/raw/FYMean1q


looks like a nice setup, but what about mobile?


Syncthing has a mobile app, and there's an app for pass called PasswordStore[0] that uses OpenKeychain[1] (a PGP manager). I'm not a fan of putting my private key on my mobile, but if I were, this would be a nice setup.

[0]: https://github.com/zeapo/Android-Password-Store

[1]: https://github.com/open-keychain/open-keychain

Edit: yeah for ctmg support, probably have to hold out for something like PostMarketOS to save us.


You don't need to put your private key on a mobile device. You can create a separate key for each device. Pass supports multiple keys.


If your phone has NFC you can use a YubiKey to store the gpg key and decrypt the password via NFC.


Yes, I saw this for pass, but I was referring to his setup, where he uses ctmg as well.

