Hacker News | flyingswift's comments

Team Foundation Server -> Visual Studio Team Services -> Azure DevOps


The one at Georgia Tech also doesn't have parking


Good for people watching, that one is. We went there for breakfast every single day when I was in Atlanta to retrieve my OMSCS degree (we were staying in the GT hotel in the same building, so it was super convenient). I'm easy to please, and my waistline is better because we don't have Waffle House in the PNW. Maybe I'd grow tired of it, but no guarantees.


Most of the backend is written in Ruby


FB uses a separate IRC instance for these kinds of issues, at least when I used to work there


What is the safe way to achieve the same result?


The concept behind this is reference stability: if you need a collection with stable references, you must introduce a level of indirection. That is, instead of a vector<T>, you use a vector<unique_ptr<T>>, and then you can take references as follows:

    auto& r = *some_vector[0];


Or std::deque. It's conceptually similar, but additional allocations are hidden under the hood and batched.


std::deque is as good as useless. The defaults for "batch size" in different compilers sit at extreme opposite ends of the tradeoff spectrum. So unless you really don't care about performance, memory, or portability, it's not a data structure you can rely on.

From memory:

In MSVC, deque will allocate memory for every single element if your elements are > 8 bytes. This will never change, due to ABI compatibility.

Clang and GCC have batching sizes of 1K and 4K (i.e. you throw away a whole page of memory even if your deque contains only one element).


I had a vague feeling that std::deque is a "heavy" thing which you shouldn't have a million of, but iterating through a couple big ones is pretty fast. 1–4K batches wouldn't hurt my feelings. Looked up GCC, it's actually slightly less heavy at 512 bytes per node[1]. But the MSVC part — that caught me completely off guard.

https://github.com/gcc-mirror/gcc/blob/47749c43acb460ac8f410...


In general it's better to use the same standard library everywhere. Discrepancies like that occur for almost every type, so if you care about having the same performance on every platform, either use libc++ everywhere or use Boost.


IIRC Boost.Container has a standard-conforming deque with a configurable batch size.


Thanks! I am just learning C++ for a new gig, and coming from Javascript land, it is a lot to take in :)


Yikes. I feel sorry for your client in advance. One does not learn C++ “for a gig”, especially when coming from JS…


For all you know, this person is simply talking about a new job. Is it necessary to be so condescending..? Sigh


I would generally suggest avoiding reference stability here (extra heap allocations) and going with the offset-based approach mentioned in the other responses.


I would generally suggest going for correctness over performance, and the solution I provided is correct in the general case. Using an offset is only correct in the special case where objects are never inserted or removed at an index less than the offset; otherwise the offset becomes invalid and you end up with bugs.

Furthermore, depending on the size of T, the performance penalty of the extra heap allocations is amortized over the cost of resizing the vector. That is, vector reallocation is significantly faster for a unique_ptr<T> than for T when T is large, and almost all memory allocators are tuned to place objects close together in space when they are allocated close together in time, so you don't lose cache locality or need to worry much about memory fragmentation.


In addition to other answers, sometimes you do know the final/max length of the vector when you construct it. In that case reserve() can reserve the necessary space, and as long as you stay under the limit all the addresses will remain valid.

(Though it's still pretty brittle, so you may want to add a ton of comments to warn yourself in the future...)


store the index 3 as an int instead of &vec[3]


This is actually untrue: you want to do as much as possible in the renderer process. The main NodeJS process is responsible for all user interaction, including mouse clicks and keyboard input; if you block the main process, the entire app becomes unresponsive.


But the main NodeJS process can spawn (regular POSIX) worker threads, just like any other GUI app’s main (event loop) process; and those worker threads can both 1. load native libraries and call into them, and 2. communicate just as directly with the renderer as the main thread can. The renderer, meanwhile, only gives you ServiceWorkers; and those can’t do anything natively. (Plus, they have all the same IPC overhead to the main renderer context that calling the renderer from Node does.)

Think of it like this: let’s say you’re creating an Electron equivalent to Mathematica. You have a big native blob of maths evaluation code. Where are you going to run it — in the renderer (as Emscripten WASM) or in a worker thread of the native app (as a native static library or DLL)?

Or let’s say you’re doing an Electron BitTorrent client. Where are you going to handle the network connections and do the file management and... basically everything the app does? Well, in this case, you have no real option: the renderer can’t open raw TCP sockets. You’ve got to do it native. (But it would have been the better choice anyway, for IOPS reasons—localStorage + virtualized attachment downloads don’t buy you very much disk concurrency.)

A less clear-cut case is a game engine. The answer there depends on whether you can get a handle to the renderer’s Canvas from your native code. If so, then the choice is obvious: native game engine, draw to renderer’s canvas. If not, it might still be more CPU efficient to go native: you might be able to approximate that over RPC if your game has a bandwidth-efficient wire protocol representation of its render command stream (as e.g. most 2D tile based games do.) Only if neither of these work would putting the game engine into the renderer be optimal.


For what my anecdote is worth, I have seen sentiment like this across lots of social media, and it has gotten louder since the protests in June. I have even seen echoes of it on Facebook Workplace, where there has been a clear and strong encouragement to become more 'antiracist'. I do not think that this is hyperbole.


Retail employees are also on that list somewhere


That is a large benefit of the doubt...


I have one of these for work and it's the first MacBook I have ever used.

I have never experienced this behavior on any other laptop. When I ran into it for the first time I was completely appalled; it is a horrible user experience.

