The performance difference between dev and prod builds of React is significant in my experience. In tests of an animated interface (fairly heavy app otherwise as well), a prod build did 2-3x the raw FPS of the dev build.
I can't really imagine what type of benefit they believe they're getting. Your error tracking system does re-assemble the stack from an out-of-band delivered sourcemap, even in prod, even with multiple versions deployed simultaneously... right? Like any decent error tracking system such as Rollbar or Sentry will do out of the box. Tell me you didn't build your own and can't even unwind a production error trace.
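The mapping step is simple in principle. Here's a toy sketch of what Rollbar/Sentry do per frame; real sourcemaps use VLQ-encoded mappings, but a plain "line:column" lookup table shows the idea (all names below are made up for illustration):

```javascript
// Toy sketch of sourcemap-based stack unwinding, roughly what
// Rollbar/Sentry do per frame. Real sourcemaps use VLQ-encoded
// mappings; this uses a plain "line:column" lookup table, and all
// the file/function names below are made up for illustration.
const mappings = {
  '1:312': { source: 'src/profile.js', line: 42, name: 'renderProfilePic' },
  '1:877': { source: 'src/app.js', line: 9, name: 'mount' },
};

function unminifyFrame(frame) {
  // frame looks like "    at t (bundle.min.js:1:312)"
  const m = frame.match(/\((.+):(\d+):(\d+)\)$/);
  if (!m) return frame;
  const hit = mappings[`${m[2]}:${m[3]}`];
  return hit ? `    at ${hit.name} (${hit.source}:${hit.line})` : frame;
}

const rawStack = [
  '    at t (bundle.min.js:1:312)',
  '    at e (bundle.min.js:1:877)',
];
console.log(rawStack.map(unminifyFrame).join('\n'));
```

The key point is that this runs server-side in the error tracker, so the minified bundle is all that ever ships to users.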
Atom and Slack often get critiqued for their use of an HTML5 stack. I'd also like better performance when it comes to startup time, memory usage and responsiveness regarding interactions (I'm using a 12" Macbook) but the stack is so useful and universal that it is here to stay and worth figuring out how to get to great performance (similar to how Emacs figured out how to work around slow startup).
Are there examples for HTML5 apps that manage to have great performance? Are there inherent limits?
If you look outside first-world countries, yes. Check out the progressive web app talks at Google I/O 2017: most of the examples on stage were Indian companies, flipkart.com, housing.com, and a few others. I work on www.justickets.in myself.
We have a huge Android market here, but they're all phones with very low storage and slow 3G connections, so the default way to get an 'app' out is an HTML5 progressive web app. And the companies here work hard to make things run fast and smooth on slow devices.
I'm not sure how effective it is, but I always liked the idea of forcing engineers to use 3G speeds one day a week so they're more inclined to support those customers. IIRC FB does this to some degree, but I think they have a different lightweight version of FB intended for those audiences.
Chrome dev tools has a reasonable connection bandwidth simulator. Is there a good one that also simulates variability / packet loss in addition to bandwidth caps?
It'd be nice to have a simple way to limit a chrome tab's available CPU in the same way. The computers, tablets and phones most people browse on are an order of magnitude slower than the desktop machines we use to develop.
Yeah, I remember back in India page speed was a real concern, not because Google could penalize you but because the network speed was so slow. Not sure about now, but back then BSNL used to suck big time.
After coming to Australia, everyone here treats 4MB+ websites as normal, which was a real mind twister for me. Thankfully page speed has always been my concern regardless of network speeds due to my 'conditioning' years ago.
It's pretty buggy, though. On Windows, at least once a day it resizes down to a window only big enough to hold the X button to close the window. And it drains the hell out of my battery on Android.
There have been articles on HN specifically about how slow VS Code is, particularly the one about rendering cursor blinking, and of course anything running on Electron will never compete with the likes of vim, &c.
However, as Electron apps go, I've found it noticeably faster and less resource-intensive than any others. Slack in particular is comparatively terrible, as was Atom in the past (people have said it's improved but I've not felt the need to try it again recently).
Currently running Slack App, Atom App, and Spotify App on my MBP. According to my system stats, I'm consuming 3GB of RAM across the entire system (OS and all of the above).
So what I would like to know is: WTF is everyone doing in these apps to create this CRAZY amount of resource hogging they keep bitching about? Or are they rockin' Chromebooks? Seriously, I want to know where all of this nonsense is coming from.
That is a crazy amount of resource hogging at 3GB.
You're running a music player, a text editor (or IDE), and a chat client. I would reasonably expect that to use less memory. One point you could make is that even native application alternatives, ones that don't use Electron, would be bloated, and I would have to agree. Visual Studio .NET is pretty darn slow/cumbersome itself.
Back in 1999 I was running Emacs, XChat, and FreeAmp on my Pentium 66 with 16MB of RAM. (A weak machine, even for 1999, but with Linux Mandrake and WindowMaker it was usable.)
The UI was less sophisticated and it didn't involve network connections, but sometimes I wonder how we got here...
A lot of this comes back to higher expectations: we now have much larger displays and color depths, and instead of simple bitmap fonts we have much higher-quality rendered vectors with advanced layout systems which can handle complex scripts, etc. Instead of rendering into a shared buffer, each window has at least one (on OS X, two) full buffers and the whole thing is composited, which is great for responsiveness and visual quality but definitely uses more memory. In the 90s, 640x480 was a common display resolution; now Apple recommends that developers ship 512x512 icons.
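Rough back-of-envelope arithmetic for that point, comparing an entire 8-bit VGA framebuffer to a single 32-bit window backing store at a Retina-class resolution (sizes are illustrative, not measured from any particular OS):

```javascript
// Bytes needed for a w x h buffer at a given bits-per-pixel depth.
const bytes = (w, h, bpp) => w * h * (bpp / 8);

// An entire 256-color VGA screen in the mid-90s:
const vga = bytes(640, 480, 8);       // 307200 bytes, ~300 KB

// One full-window 32-bit buffer at a Retina-class resolution
// (and compositing may keep more than one of these per window):
const retina = bytes(2880, 1800, 32); // ~20 MB

console.log((vga / 1024).toFixed(0) + ' KB');           // 300 KB
console.log((retina / 1024 / 1024).toFixed(0) + ' MB'); // 20 MB
```

So a single modern window's backing store can cost more memory than sixty entire 90s desktops' worth of framebuffer, before any application code runs.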
That's not to say that there aren't decisions to reconsider about code size, resource formats, etc. but I think it's easy to forget how much more behaviour has moved into our default baseline assumptions.
In my very limited experience, this makes programming anything several times more complicated. Writing a document editor? Difficult. Writing a networked, multi-user document editor? As difficult as the last thing but with asynchrony and lots of new failure modes.
Slack on my computer, with five teams, consumes 1.8GB of RAM.
Microsoft Word, with a multi-page complicated document, is using 200MB.
Just because we _have_ the resources to allow applications to bloat doesn't mean we should be alright with it. It's a similar argument to electric versus ICE cars; just because gas is cheap doesn't mean we shouldn't support EVs. EVs are fundamentally better for the future.
If we focus on performance optimizing the apps we use, it opens up a wide range of new computing hardware. You make fun of Chromebooks, but imagine how much more interesting your work would be if you could use a machine with that performance. Battery life would be substantially higher. The cost of the machine would be substantially lower. The only reason most people need massive performance beasts is because we've pushed abstraction hell and bad engineering practices, so all of our tools suck.
Docker is another offender in recent times. It's great if you're on Linux, but few people are. So on OSX, we need to run a VM, and there goes another gig of memory.
3GB of RAM is precious (and I say that as someone who has 128GB)!
Your bar should not be, "Well, I don't see a slowdown." Many people still develop software on computers with only 8GB total RAM. Throw in a browser and one or two more applications and you're getting close to exhausting that.
Eventually you are going to exhaust your memory resources; using software with poor performance constraints is (in my opinion) inherently antagonistic to a development machine. That's not to say I'm against desktop JavaScript/HTML5 apps categorically, I just think they need much better optimization.
Most of the time Slack is behaving quite well, but I've experienced it running at 80+% CPU for long periods of time while eating 2-3GiB of RAM; I always force quit it and didn't bother investigating. But yes, sometimes it turns into a resource hog, same with VSCode or Atom; something sometimes makes them feel unresponsive and "laggy".
How's that even possible? I mean, just the kernel plus mds_store and mdworker on a Mac take like 2GB of memory.
I also run two Java processes with 1GB predefined each (well, I could probably fine-tune them to use 512MB).
And one angular/cli process, which accounts for 800MB.
Plus my database, which gets 500MB as well.
Spotify on Mac used way more than 500MB of memory the last time I used it, and also generated an extreme amount of I/O pressure.
Atom actually used at least 500MB of memory as well, with just the C# plugin.
Plenty. Discord, VSCode, and MongoBooster are all apps that I feel hit 90% of native performance while still being based on web tech. Hyper terminal is another one which has very specific reproducible performance issues, but overall is pretty great and fast.
It's incompetence on the part of Slack and GitHub, not the tech itself.
It does show a little of React's dark side though. It's worth asking... Why would Slack need to debug into their view rendering library on customer machines? It's not their code that's made accessible here, it's React's.
React introduces a layer of indirection into your rendering. Rather than saying "draw a profile pic" and then "a profile pic means these DOM manipulations", with React you add another layer: "hey, oracle, what should the whole page look like now, and what DOM manipulations should we do to get there?"
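A minimal sketch of that indirection, using a toy tree format (this is not React's actual reconciler, just the shape of the idea): you describe the whole page, and a diff step decides which DOM operations to actually perform.

```javascript
// Toy reconciler: compare two descriptions of the page and emit
// only the operations needed to turn one into the other.
// Not React's algorithm, just an illustration of the indirection.
function diff(oldTree, newTree, path = 'root') {
  const ops = [];
  if (oldTree === undefined) {
    ops.push({ op: 'create', path, node: newTree });
  } else if (newTree === undefined) {
    ops.push({ op: 'remove', path });
  } else if (oldTree.tag !== newTree.tag) {
    ops.push({ op: 'replace', path, node: newTree });
  } else {
    if (oldTree.text !== newTree.text) {
      ops.push({ op: 'setText', path, text: newTree.text });
    }
    const len = Math.max((oldTree.children || []).length,
                         (newTree.children || []).length);
    for (let i = 0; i < len; i++) {
      ops.push(...diff((oldTree.children || [])[i],
                       (newTree.children || [])[i],
                       `${path}/${i}`));
    }
  }
  return ops;
}

const before = { tag: 'div', children: [{ tag: 'span', text: 'Hi' }] };
const after  = { tag: 'div', children: [{ tag: 'span', text: 'Hi there' }] };
console.log(diff(before, after));
// only the changed text node produces an operation
```

The app code never says "update that span"; it re-describes everything and trusts the diff to find the one change. That's the "oracle" the parent comment is talking about, and the part you may end up debugging into.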
It's like having an AI look at your code and decide how it should actually run. Very cool, very futuristic. Not convinced it's a practical decision.
Putting something like that in your app makes for a much more complex debugging process. It always seemed like the React team was trying to solve a really, really hard problem in order to avoid solving an easier problem a bunch of times. Which is classic programmer thinking, and generally a good strategy. But I wonder if it's good for us in the end.
The principle you could adopt, which would make React a bad choice, is something like "solve one hard problem if it means you can avoid solving a lot of easy problems, but only if you're solving those actual small problems in bulk. If you're only solving a subset of them, you're going to cause problems for yourself later."
Which DOM queries and manipulations make sense? Is React actually answering that question for you, or is it mashing the buttons until it works?
I think you are overthinking the meaning of this. As engineers we tend to blame complex layers we didn't write, especially if we don't have insight into them.
Giving their engineers the option to peer into React whenever they like is probably getting more pure Slack bugs fixed. That is not to say I think the current Slack resource use is acceptable. There should be a middle ground where most users aren't wasting resources all of the time.
How is this absurd? I've had problems debugging minified Javascript in production too. So from my point of view, this seems like an appropriate response.
There are ways to debug minified JS in production, sourcemaps being one of them.
In addition to the speed difference, there's a significant payload size difference: something like 1.5+MB for the dev build vs. 70KB for the production build, last time I checked.
Sourcemaps have the drawback that you actually need to generate them first. Sometimes, getting all of the tooling together is tedious.
And about the size: if the JS file is cached properly, downloading 1.5MB once doesn't seem like a huge issue, especially for something like Slack where only desktops use the web client (most mobile users probably download the Slack app). I don't know if this was the case here, though.
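On the tooling point: assuming webpack, the usual shape is a small config sketch like the one below (hypothetical values, not anyone's actual setup). You minify for production, emit the maps as separate files with no sourceMappingURL reference in the bundle, and upload the maps to your error tracker in CI rather than deploying them.

```javascript
// Hypothetical webpack config sketch: ship a minified production
// bundle while emitting sourcemaps out-of-band, so an error tracker
// can unwind traces without exposing readable source to users.
module.exports = {
  mode: 'production',            // minifies and sets NODE_ENV=production
  devtool: 'hidden-source-map',  // emit .map files, but add no
                                 // sourceMappingURL comment to the bundle
  output: {
    filename: 'bundle.[contenthash].js', // hash ties each trace to a build,
                                         // even with multiple versions live
  },
  // The .map files are then uploaded to Sentry/Rollbar in CI
  // and never copied to the web server.
};
```

The content hash in the filename is what makes "multiple versions deployed simultaneously" workable: the tracker matches each reported stack to the map for that exact build.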
Privacy is a huge concern. It's difficult to phrase this properly without sounding like an asshole, but Discord's development team has a lot of very young people (no proof, but I'd swear some of them are 13-16 years old). If you've ever listened in on their Discord voice channels, the level of immaturity is astounding. Power to them - it's great to see talented youth create a great product. I just wouldn't trust them with a business's confidential information.
Discord's specifics aside, it blows my mind that people use any off-site SAAS for internal communications, including Slack. The database set contains so much critical information, including infrastructure passwords, private SSL and SSH certs, etc. Employees will send anything and everything over the company's chat client. Even if you trust the company's employees and general security, the fact remains that your uploaded attachments typically have a public URL not requiring authentication to download.
End-to-end encryption is essentially useless/unsolved when it comes to large, dynamic groups like Slack channels. Any new member who joined wouldn't be able to see any history in any channel.
But this is not a problem of Discord but of all group chats. The initial discussion point was that Discord might be insecure because of young hackers building the product.
Do you find Discord to perform better than Slack? Any thoughts as to why?
I've used Discord for gaming related purposes on another computer, but it didn't feel... "better" to me. Sluggish at times, but understandably so, in large rooms with lots of traffic.
I'm mainly curious in your statement as to how Discord performs compared to Slack, and if better, how they achieve "better".
With Slack you need a new login/password for every team, plus 100 clicks for sign-up, skipping the tutorial, and email verification. WTH, why? Getting users into Discord is one single click. With Discord you have one login and you can have different names per server.
And Slack's admin pages are just a confusing pile of mess; my .vimrc is lightweight by comparison. Slack is so overrated and people just use it because everybody does.
> Slack is so overrated and people just use it because everybody does.
Sidenote: I use it because the integrations take a burden off of me. I.e., my CI might not have integrations for Discord. My CI has plugins; do those have integrations for Discord too? Etc., etc. Being the most popular is not always just due to "we use it because everyone does".
Personally, I dislike how expensive Slack is. I'm not sure if I prefer Slack over Discord or not, but even if I preferred Discord, I'd still choose Slack for the integrations that I don't have to write myself.
I've already used it, quite a bit, as I mentioned. Not for work specifically, though. And never the API of course. Heck, I probably have it open at home right now haha.
My experience was the opposite. I just ran a test with some medium, similarly sized chat servers with many users and plenty of history (Bitcoin-related chat servers) on Slack and Discord in Chrome, which should be representative of their native Electron apps:
Maybe some more of us could do some tests as well. Anyways, I really can't believe that Discord is sluggish. Slack definitely is. First I thought it was so slow because they load all the chat history, but they don't.
I do for outside-asking-for-help. At my current client internal communications have to be self-hosted so neither Slack nor Discord is an option; if I were still at the startup where I worked previously (where teams chose their own tools) I'd certainly be pushing Discord.
As an IRC, Slack, and Matrix user and netadmin, I will say that Slack and other nextgen chat services have a few advantages over IRC:
- automatic backlog resumption
- server stored pageable backlog
- continuous presence (you don't disappear from the channel when you leave, you're just marked as away)
- authentication built into the spec rather than implemented using a "services" layer
- general upgrades to what is allowed in the chat, such as long pastes, inline images, and file uploads
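The server-stored pageable backlog point can be sketched like this (hypothetical message and API shapes, not any real protocol): the client asks for everything after the last id it saw, instead of needing a bouncer to replay a live stream.

```javascript
// Toy sketch of server-side backlog paging. The message shape and
// fetchSince API are made up for illustration; real services page
// by timestamp or cursor, but the idea is the same.
const backlog = [
  { id: 1, from: 'ana', text: 'deploy done' },
  { id: 2, from: 'bob', text: 'nice' },
  { id: 3, from: 'ana', text: 'tagging v1.2' },
];

// Return up to `limit` messages the client has not yet seen.
function fetchSince(lastSeenId, limit = 50) {
  return backlog.filter(m => m.id > lastSeenId).slice(0, limit);
}

console.log(fetchSince(1)); // the two messages this client missed
```

Because the server owns the history, reconnecting clients and brand-new devices get the same view, which plain IRC can only approximate with a bouncer.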
If you haven't tried Matrix yet (the best client is probably Riot at https://riot.im/ ), I encourage you to give it a try. It's essentially next generation IRC.
Sure, but a bouncer is another layer on top of your connection to the server, when architecturally it's much better to have the server manage that for you.
There are a million more embedded features in Slack than there are in IRC. Sure, someone could spend a year trying to implement all of those plugins on their private IRC server and have it be less functional and user-friendly, but why would someone want to waste their time and money doing that?
I wonder if they were for some reason unable to emit a sourcemap of their js bundle. Unless they are doing something special here, that should be sufficient for debugging.
Actually, that is exactly what I'd been doing. I recently upgraded to 32GB of RAM and now the desktop app is usable again, but obviously this still isn't reasonable.
WTF. I remember running a network of 10+ servers and worker machines on 4GB hosts not many years ago. And now you can't run a "modern" chat app without having 32GB of RAM?
You know what? I'm happy with IRC. weechat uses maybe 28MB and runs stellar.
Maybe you should just consider that the tools you are using simply are flawed and move to something proven instead?
Not true. Slack does not take 32GB of RAM to be usable; more likely the rest of his tools take 30GB of RAM. My laptop only has 8GB of RAM and I have Chrome and Firefox open and active while using Slack. Slack has been open for a couple of weeks and is using less than 500MB of RAM on my machine.
This is correct: my daily environment usually involves multiple VMs and a Docker Compose setup, either a JetBrains IDE or Visual Studio Code, Chrome with some reasonable number of tabs open, Slack, and a few utilities. To further illustrate why this sounds insane: before I was at 32GB, I was at 8GB, and conserved RAM by running Linux with an extremely lean setup (i3 + st + Chrome). Now I have plenty to spare, and I'm using a much richer environment.
So yeah, it is a little more complicated than just "Slack needed 32GB"; in fact, and thankfully, even my whole stack fits comfortably. It's the machine I upgraded from that it didn't work well on.
I gave up on the Slack desktop client after regularly finding it using 3GB of RAM for no obvious reason. Switched to the browser client, but found it buggier; it needed to be refreshed periodically because it became so slow there was a noticeable delay on keypresses. Now I'm using Slack via XMPP with Pidgin and things are much, much better.
You're all looking at this from a dev perspective ... look at Slack's customer base.
Slack Enterprise is expensive for a messaging service, and one of their main selling points is support and reliability. When a huge customer reports a breaking bug they need to resolve it ASAP, turnaround time is probably the number-one priority.
I've used it; it wasn't bad at all, but it definitely clashed visually (not that Electron apps don't).
>I'm not sure if there's decent Windows support.
Cygwin with X11 is one of the first things I install on my work machine whenever I get a new job. It can't do fancy GTK stuff super well but most of the apps I use work great.
I believe this is a response saying that Electron made cross-platform easy, but at the cost of performance. Native clients still aren't as easy to build as HTML+CSS+JS, and the barrier to entry or cost to maintain is too high vs. wrapping the web app you already built.