Hacker News | acters's comments

Local hosting and community support will always offer more freedom than the rather poor filtering that big tech loves to do. Why not offer an option to pay for copyrighted content? Seriously, if they can identify that something is copyrighted, then they can let users pay for it, but no: they'd rather intentionally filter all of it out and end up with a useless product that will eventually be phased out.


Also the gg in gguf


this is the least amount of loc I can think of rn

    # Advent of Code 2024, day 1: alternating ints form the two columns
    nums = list(map(int, open('input').read().split()))
    data = {i + 1: sorted(nums[i::2]) for i in range(2)}
    total_distance = sum(abs(a - b) for a, b in zip(data[1], data[2]))
    print("part 1:", total_distance)
    similarity_score = sum(x * data[2].count(x) * data[1].count(x)
                           for x in set(data[1]) & set(data[2]))
    print("part 2:", similarity_score)


Mostly because such a system would result in in-fighting among programs that all want to be prioritized as important. To be fair, it will mostly be larger companies who take advantage of it for a "better" user experience. That's why it's important to either reduce the number of running applications to a minimum or simply control priorities manually for the short bursts most users experience. If anything, CPU-intensive tasks are more likely to be bad code than a genuinely effective use of resources.

Though when it comes to gaming, there is a delicate balance: game performance should be prioritized, but not allowed to lock up the system for multitasking purposes.

Either way, considering this is mostly for idle tasks, there is little reason to automate it beyond giving users a simple command for scripting purposes that they can use to toggle the various behaviors.


You're talking about user-space preemption. The person you're replying to, and the article, are about kernel preemption.


Games run in a tight loop, they don’t (typically) yield execution. If you don’t have preemption, a game will use 100% of all the resources all the time, if given the chance.


Games absolutely yield; even if the rendering thread tries to run at 100%, you'll likely still be sleeping in the GPU driver as it waits for back buffers to free up.

Even non-rendering systems usually run at game tick rates, since running them full-tilt can starve adjacent cores depending on false sharing, cache misses, bus bandwidth limits, and the like.

I can't think of a single title I worked on that did what you describe, embedded stuff for sure but that's a whole different class that is likely not even running a kernel.
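As a concrete illustration of the tick-rate loops described above, here is a minimal sketch (hypothetical names, Python only for illustration): the sleep at the end of each tick is where the thread yields the core instead of spinning.

```python
import time

def run_ticks(tick_hz, n_ticks, work):
    """Call `work` at a fixed tick rate, sleeping off the leftover frame time."""
    period = 1.0 / tick_hz
    deadline = time.monotonic()
    for _ in range(n_ticks):
        work()
        deadline += period
        remaining = deadline - time.monotonic()
        if remaining > 0:
            # This sleep is the yield: the thread hands the core back to
            # the scheduler instead of busy-waiting until the next tick.
            time.sleep(remaining)

ticks = []
run_ticks(tick_hz=60, n_ticks=30, work=lambda: ticks.append(time.monotonic()))
```

A real engine would pace against vsync or the GPU fence rather than a wall-clock sleep, but the shape is the same: do the tick's work, then give the time back.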


Maybe on DOS. Doing any kind of IO usually implies “yielding”, which most interactive programs do very often. Exhausting its quantum without any IO decreases the task’s priority in a classic multilevel feedback queue scheduler, but that’s not typical for programs.


Games run in user space. They don't have to yield (that's cooperative multitasking); they are preempted by the kernel, and don't have a say about it.


Make a syscall for IO. Now the kernel takes over and runs whatever it likes for as long as it likes.

Do no syscalls. Timer tick. Kernel takes over and does whatever as well.

NO_HZ_FULL, isolated CPU cores, interrupts routed to some other core, and you can spin using 100% CPU forever on a core. Do games do anything like this?


Pinning on a core like this is done in areas like HPC and HFT. In general you want a good assurance that your hardware matches your expectations and some kernel tuning.

I haven't heard of it being done with PC games. I doubt the environment would be predictable enough. On consoles, though?
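For what it's worth, the user-space half of this is a one-liner on Linux. A minimal sketch (hypothetical helper name; `os.sched_setaffinity` is a Linux-only API, so the sketch falls back to a no-op elsewhere):

```python
import os

def pin_to_cpu(cpu: int) -> set:
    """Pin the calling process to a single CPU and return the resulting mask.
    Linux-only: sched_setaffinity is not available on macOS or Windows."""
    os.sched_setaffinity(0, {cpu})   # pid 0 means "this process"
    return os.sched_getaffinity(0)

if hasattr(os, "sched_setaffinity"):
    mask = pin_to_cpu(0)             # CPU 0 always exists
else:
    mask = {0}                       # no-op fallback on other platforms
```

The HPC/HFT setups mentioned above pair this with kernel boot parameters (core isolation, IRQ affinity) so nothing else ever gets scheduled on the pinned core.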


We absolutely pinned on consoles; anywhere you have fixed, known hardware, tuning for that specific hardware usually nets you some decent benefits.

From what I recall we mostly did it for predictability, so that things that might go long wouldn't interrupt deadline-sensitive things (audio, physics, etc.).


Nice, thank you


Thinking about it, the threads in a game that normally need more CPU time are the ones doing lots of syscalls. You'd have to use a fair bit of async and atomics to split the work into compute and chatting with the kernel. Might as well figure out how to do it 'right' and use 2+ threads so it can scale. Side note: compute-heavy, low-syscall-frequency work like terrain gen normally belongs in the pool of background threads.
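A minimal sketch of that split (hypothetical names throughout): a small pool handles the compute-heavy, low-syscall terrain-gen-style jobs, while the submitting thread stays free for kernel-chatty work and just collects finished chunks.

```python
from concurrent.futures import ThreadPoolExecutor
import queue

def gen_chunk(seed: int) -> list:
    # Stand-in for expensive, pure-compute noise generation.
    return [(seed * 31 + i) % 97 for i in range(8)]

results = queue.Queue()

with ThreadPoolExecutor(max_workers=2) as pool:
    # Background pool takes the compute-heavy, low-syscall work...
    futures = [pool.submit(gen_chunk, s) for s in range(4)]
    # ...while this thread remains available for syscall-heavy work
    # (input polling, network IO) and simply drains completed jobs.
    for f in futures:
        results.put(f.result())

chunks = []
while not results.empty():
    chunks.append(results.get())
```

In a real engine the drain would be non-blocking (poll `future.done()` once per tick) rather than calling `result()` in order, but the division of labor is the same.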


Yeah, you are right. However, some of what I said does have merit, as plenty of the things I mentioned apply to why you would need dynamic preemption. That said, as the other person pointed out, a dynamic system that checks and applies a new preemption config adds overhead of its own. The kernel can't always know how long tasks will take, so for short-running tasks the cost of dynamically reconfiguring can be worse than just setting the preemption configuration up front.

But yeah thanks for making that distinction. Forgot to touch on the differences


I fully agree with you. I wish people would stop focusing so heavily on how removing features can potentially reduce attacks, especially when those exact features exist to prevent the WORST possible attack vector: a mishandled or exploited clipboard. I do not see how else a user would be able to access their passwords without either typing them on the keyboard or opening the edit dialog to select "show password", which would just make them plain text and MUCH easier to attack. These features were introduced not only for ease of use, but to provide a more secure way of transferring the password to the desired location. These maintainers clearly did NOT think this through and potentially created a much less secure package than intended.


The data has to be decrypted and read, so eventually you reverse engineer the client and figure out how to decrypt on the fly; then they wise up and introduce key-based signing, which you eventually steal from the client, breaking the encryption again; then anti-cheat is implemented... thus, the cat-and-mouse game is born, lol


Once the siphoning happens on the same machine your client is running on, it's at least easier to detect through anti-cheat. If it can run on a completely separate machine, it seems like it'd be essentially impossible to detect except through changes in how a user acts, like only going directly to the mobs with the juicy loot and ignoring the trash, and that's really tough to detect.


Hypothetically the client doesn't really have to know about the juicy loot until it's dropped, right? On a sufficiently fast internet connection, the client doesn't need to know about anything until exactly the time when the player needs to know it, at which point revealing it in a cheating tool is meaningless.


At the very least in EverQuest (iirc), NPCs would sometimes use their loot. I recall tanks occasionally letting the rest of the raid know what weapon drop the boss had on them, because they were seeing a different damage type (pierce, slash, bludgeon instead of hit) and the boss was known to sometimes drop a certain piercing weapon (for example).

That being said, I can totally think of a few ways to get around that. It's like you said, the client doesn't really need to know until the enemy is looted.

[Actually, the one exception I can think of is that rogues can pickpocket certain loot. And while pinging the server to generate loot once the NPC is dead feels like it shouldn't be a major problem, having to ping the server to generate loot while the NPC is still alive does make the system architect in me a bit more nervous... at the very least for systems as they were when EQ first came out.]


NPCs also gain stat bonuses from their equipment. NPCs wear every slot except only one wrist, finger, and ear. Big difference between a mage pet in full banded vs. naked.


In modern games you don't need to, unless the boss changes depending on the loot, and even then it would be tangential. Don't forget this is EQ2 we're talking about: the internet back then was sloooooow, and online games of that type were pretty new, so designs and security were still being sorted out. Now you know from the beginning that any useful information about the enemy and world will be pried out of your game, so you go through the whole anti-cheat cat-and-mouse game.


The final stage being the bot-player kill & ban: compute an all-knowing AI, hold its behavior against that of players, then cull the closest percentage.


It would also help to not send the high-value information to the client until required. Especially loot drops!
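A hedged sketch of that idea (hypothetical names throughout): the server rolls loot only at the moment of death, so while the NPC is alive there is simply nothing in the client's traffic for a sniffer to see.

```python
import random

# Hypothetical server-side loot table.
LOOT_TABLE = ["rusty sword", "piercing blade", "gold ring"]

class Npc:
    def __init__(self, hp, rng):
        self.hp = hp
        self.rng = rng
        self.loot = None  # intentionally undetermined while alive

    def take_damage(self, dmg):
        self.hp -= dmg
        if self.hp <= 0 and self.loot is None:
            # Roll on death, server-side; only now would anything
            # about the drop be sent to clients.
            self.loot = self.rng.choice(LOOT_TABLE)
        return self.loot

rng = random.Random(42)
boss = Npc(hp=10, rng=rng)
alive_view = boss.loot           # None: nothing to siphon while alive
dead_view = boss.take_damage(10) # roll happens here
```

The pickpocket case mentioned downthread is the wrinkle: if loot can be interacted with before death, the roll has to happen earlier, and the hiding trick stops being free.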


But then you have to pay for more server side compute! Think of the profit margins!


Sending loot on NPCs to every client costs more, not less.

The reason why they probably did it is because NPCs actually used the items. When the froglok king loaded his two-handed sword, he was actually using it. And when he didn't load it, he wasn't.


Won't you have to compute it on the server anyway, to be sure the client is not lying?


I don't buy the anti-theft angle either. People's phones still end up stolen, and they are still contacted by the thieves to remove the iCloud account. iCloud is a good enough feature to prevent theft, and having authorized repair options in it is great, so that notion is already pretty weak. If someone replaces the motherboard with a blank one (no iCloud attached), then a check of parts serialized to an iCloud account should be implemented to prevent harvesting parts from a locked iPhone. There are better consumer-friendly methods that Apple simply ignores.


The more interesting thing is why the default state has to be made vulnerable in the first place, instead of just making Lockdown the default way of using an Apple device.


The even more interesting thing is that all functionality increases the attack surface and therefore makes all devices more vulnerable. The most secure state is not to have the device at all or, failing that, to have it permanently turned off. This is true of every device, not just Apple's.

The reason people possess devices is to use functionality, and therefore they have to make some tradeoffs in terms of security. The default state is what Apple currently thinks is the best tradeoff of risk vs. functionality for most people. For people with an extremely unusual threat profile, it stands to reason a different tradeoff might be appropriate.


Great reply, but don't forget to add Apple's bottom line to the balance beam of user risk and device functionality there


True.

That said, they do give the user a lot of granular control to turn off individual functions if the user feels differently and wants to change their stance, e.g. iMessage can be disabled with a switch in Settings.


Because it turns off a lot of functionality people like:

https://support.apple.com/en-us/HT212650

This is a classic challenge for security: every feature expands the attack surface, but users often pick what to buy based on those features.


Isn't there something like a 50% performance hit too, since it turns off a lot of optimizations?


In Safari, yes; losing the JavaScript JIT is hefty, but I'd somewhat cynically argue that it's probably balanced out performance-wise if you install an ad blocker.


Lots of people would be blocked from iMessaging each other TIF images.


It's also substantially slower than the speed of light; while that's not the biggest factor, it is still slower than other forms of transmission.

As you said, there is a lot of noise that requires powerful error correction, such as how Reed-Solomon codes were used in deep space communication. Prepackaging information before relying on wireless communication is usually the most necessary part of any reliable, complex system.
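Not Reed-Solomon itself (which works over finite fields and corrects bursts), but a minimal Hamming(7,4) sketch of the same forward-error-correction idea, with hypothetical helper names: a single flipped bit is located by the syndrome and corrected without any retransmission.

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit codeword.
    Positions 1..7 are p1, p2, d1, p3, d2, d3, d4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # parity over positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3      # 0 = clean, else 1-based error position
    c = list(c)
    if pos:
        c[pos - 1] ^= 1             # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
noisy = list(word)
noisy[4] ^= 1                       # one bit flipped "on the wire"
recovered = hamming74_decode(noisy)
```

Deep-space links use far stronger codes than this, but the principle is the same: spend extra bits up front so the receiver can fix errors locally instead of asking for a resend across a minutes-long round trip.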


If they had contactless, then they wouldn't be able to gather the data from their mobile device...

