I recently compared performance per dollar on benchmarks for CPUs and GPUs today vs 10 years ago, and surprisingly, CPUs had much bigger gains. Until I saw that for myself, I thought exactly the same thing as you.
It seems shocking given that all the hype is around GPUs.
This probably wouldn't be true for AI-specific workloads, because one of the other things that happened there in the last 10 years was optimising specifically for math with lower-precision floats.
It's because of use cases. Consumer-wise, if you're a gamer, the CPU just needs to be at the "not the bottleneck" level for the majority of games, since the GPU does most of the work once you start increasing resolution and detail.
And many pro-level tools (especially in the media space) offload to the GPU because of its much higher raw compute power.
So, basically, for many users the gain in CPU performance won't be as visible in their use cases.
> If a person abuses the shared kitchen, they get kicked out. This is a business.
Not just any business; it's a landlord-tenant relationship.
You can't simply kick out a tenant. You have to do a formal eviction process. In many cities this requires collecting evidence of contractual breach, proving that the tenant was notified they were being evicted (such as through a paid service to officially serve and record delivery of the notice), and then following the appropriate waiting period and other laws. It could be months and tens of thousands of dollars of legal fees before you can kick someone out of a house.
Contrast that with the $213 inflation-adjusted monthly rent that the article touts. How many months of rent would they have to collect just to cover the legal fees of a single eviction?
I think the biggest tool is higher expectations. Most programmers really haven't come to grips with the idea that computers are fast.
If you see a database query that takes 1 hour to run and only touches a few GB of data, you should be thinking "Well, NVMe bandwidth is multiple gigabytes per second, why can't it run in 1 second or less?"
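To make that concrete, a rough back-of-envelope sketch in Python (the numbers are illustrative assumptions, not measurements):

    # Back-of-envelope: how long *should* a scan of a few GB take?
    # Numbers are illustrative assumptions, not measurements.
    data_gb = 5                      # size the query actually touches
    nvme_gb_per_s = 3.5              # sequential read bandwidth of a typical NVMe drive
    ram_gb_per_s = 30                # rough single-socket memory bandwidth

    print(f"NVMe scan: {data_gb / nvme_gb_per_s:.2f} s")   # ~1.4 s
    print(f"RAM scan:  {data_gb / ram_gb_per_s:.2f} s")    # ~0.17 s

If the data fits on one drive, anything much beyond a second of wall-clock time is overhead you chose, not physics.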
The idea that anyone would accept a request to a website taking longer than 30 ms (the time it takes for a game to render its entire world, including both the CPU and GPU parts, at 60 fps) is insane, and nobody should really accept it, but we commonly do.
Pedantic nit: at 60 fps the per-frame time is 16.66... ms, not 30 ms. Having said that, a lot of games run at 30 fps, or run different parts of their logic at different frequencies, or do other tricks that mean there isn't exactly one FPS rate the thing is running at.
The CPU part happens on one frame, the GPU part happens on the next frame. If you want to talk about the total time for a game to render a frame, it needs to count two frames.
Computers are fast. Why do you accept a frame of lag? The average game for a PC from the 1980s ran with less lag than that. Super Mario Bros had less than a frame between controller input and character movement on the screen. (Technically, it could be more than a frame, but only if there were enough objects in play that the processor couldn't handle all the physics updates in time and missed the v-blank interval.)
If vsync is on (which was my assumption in my previous comment), then if your computer is fast enough you might be able to run the CPU and GPU work entirely within a single frame, using Reflex to delay when simulation starts and lower latency. But regardless, you still have a total time budget of 1/30th of a second to do all your combined CPU and GPU work to get to 60 fps.
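To make that budget concrete, here's a toy sketch of the pipelining being described (nothing engine-specific, just the 60 fps arithmetic):

    # Toy model of a pipelined renderer at 60 fps (vsync): the CPU simulates
    # frame N while the GPU draws frame N-1. Throughput is one frame per
    # 16.7 ms slot, but input-to-photon latency spans two slots (~33 ms).
    FRAME_MS = 1000 / 60

    for n in range(3):
        cpu_start = n * FRAME_MS            # simulation for frame n
        gpu_start = (n + 1) * FRAME_MS      # rendering for frame n, one slot later
        present   = (n + 2) * FRAME_MS      # frame n reaches the display
        print(f"frame {n}: cpu @ {cpu_start:5.1f} ms, "
              f"gpu @ {gpu_start:5.1f} ms, on screen @ {present:5.1f} ms")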
Just as an example, round-trip delay from where I rent to the local backbone is about 14 ms alone, and the average for a webserver is 53 ms, just for a simple echo reply. (I picked it because I'd hoped it was in Redmond or some nearby datacenter, but it looks more likely to be in a cheaper-labor area.)
However, it's only the bloated ECMAScript (JavaScript) trash web of today that makes a website take longer than ~1 second to load on a modern PC. Plain old HTML, images on a reasonable diet, and script elements only for the interactive things can scream.
In the cloud era this gets a bit better, but at my last job I removed a single service that was adding 30 ms to response time and replaced it with a Consul lookup with a watch on it. It wasn't even a big service: same DC, a very simple graph query with a very small response. You can burn through 30 ms without half trying.
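For what it's worth, the watch pattern is roughly this, as a hedged sketch: long-polling the local Consul agent's blocking-query API and caching the result in-process, so the per-request hop disappears ("web-backends" is a made-up service name; assumes the python-requests library and an agent on localhost:8500):

    import requests

    CONSUL = "http://127.0.0.1:8500"
    SERVICE = "web-backends"   # hypothetical service name

    def watch_service(index=0):
        # Blocking query: only returns when the watched data changes
        # (or the wait time elapses), so the loop is cheap to run.
        while True:
            resp = requests.get(
                f"{CONSUL}/v1/health/service/{SERVICE}",
                params={"index": index, "wait": "30s", "passing": "1"},
                timeout=40,
            )
            resp.raise_for_status()
            index = int(resp.headers["X-Consul-Index"])
            nodes = [e["Service"]["Address"] or e["Node"]["Address"]
                     for e in resp.json()]
            yield nodes   # cache this locally; lookups become in-process

    for healthy in watch_service():
        print("healthy backends:", healthy)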
This is again a problem understanding that computers are fast. A toaster can run an old 3D game like Quake at hundreds of FPS. A website primarily displaying text should be way faster. The reasons websites often aren’t have nothing to do with the user’s computer.
That’s per core, assuming the 16 ms is CPU-bound activity (so 100 cores would serve 100 customers). If it’s I/O, you can overlap a lot of customers, since a single core can easily keep track of thousands of in-flight requests.
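A minimal asyncio sketch of that overlap (the sleep is just a stand-in for real I/O waits):

    import asyncio, random

    # One event loop (one core) keeping thousands of I/O-bound requests in
    # flight at once: while one request waits on the database/network, the
    # loop services the others.
    async def handle_request(i):
        await asyncio.sleep(random.uniform(0.01, 0.05))  # stand-in for I/O wait
        return i

    async def main():
        loop = asyncio.get_running_loop()
        t0 = loop.time()
        results = await asyncio.gather(*(handle_request(i) for i in range(5000)))
        dt = loop.time() - t0
        print(f"served {len(results)} overlapped requests in {dt:.2f} s on one core")

    asyncio.run(main())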
Uber could run the complete global rider/driver flow from a single server.
It doesn't, in part because each of those individual trips earns $1 or more, so it's perfectly acceptable to the business to be more inefficient and use hundreds of servers for this task.
Similarly, a small website taking 150 ms to render the page only matters if the lost productivity costs more than the engineering time to fix it, and even then, fixing it only makes sense if that engineering time isn't more productively used to add features or reliability.
Practically, you have to parcel out points of contention to a larger and larger team to stop them from spending 30 hours a week just coordinating for changes to the servers. So the servers divide to follow Conway’s Law, or the company goes bankrupt (why not both?).
Microservices try to fix that. But then you need bin packing, so microservices beget Kubernetes.
I'm saying you can keep track of all the riders and drivers, matchmake, start/progress/complete trips, with a single server, for the entire world.
Billing, serving assets like map tiles, etc. not included.
Some key things to understand:
* The scale of Uber is not that high. A big city surely has < 10,000 drivers simultaneously, probably less than 1,000.
* The driver and rider phones participate in the state keeping. They send updates every 4 seconds, but they only have to be online to start a trip. Both phones cache a trip log that gets uploaded when the network is available.
* Since driver and rider send updates every 4 seconds, and since you don't need to be online to continue or end a trip, you don't even need an active spare for the server. A hot spare can rebuild the world state in 4 seconds. State for a rider or driver is just a few bytes each for id, position and status (see the sketch after this list).
* Since you'll have the rider and driver trip logs from their phones, you don't necessarily have to log the ride server-side either. It's also OK to lose a little data on the server, so you can use UDP.
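A rough sketch of how small that bookkeeping is (all counts and field sizes here are my own assumptions, not Uber's numbers):

    import struct

    # One record per active driver or rider: 8-byte id, lat/lon as 4-byte
    # floats, 1-byte status. Counts below are assumptions for illustration.
    RECORD = struct.Struct("<q f f B")                      # ~17 bytes packed

    active_drivers = 1_000_000
    active_riders  = 1_000_000

    state_mb = (active_drivers + active_riders) * RECORD.size / 1e6
    updates_per_sec = (active_drivers + active_riders) / 4  # one ping every 4 s

    print(f"world state: ~{state_mb:.0f} MB in RAM")        # tens of MB
    print(f"update rate: ~{updates_per_sec:,.0f} msgs/s")

The whole mutable world state fits comfortably in the RAM of a single box; the question is only whether you can ingest the update stream, and small UDP packets make that tractable.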
Don't forget that in the olden times, all the taxis in a city like New York were dispatched by humans. All the police in the city were dispatched by humans. You can replace a building of dispatchers with a good server and mobile hardware working together.
You could envision a system that used one server per US county, and that's about 3,000 servers. Combine rural counties to get that down to 1,000, and that's probably fewer servers than Uber runs.
What the internet will tell me is that Uber has 4,500 distinct services, which is more services than there are counties in the US.
The reality is that, no, that is not possible. If a single core can render and return a web page in 16ms, what do you do when you have a million requests/sec?
The reality is most of those requests (now) get mixed in with a firehose of traffic, and could be served much faster than 16ms if that is all that was going on. But it’s never all that is going on.
This is a terrible time to tell someone to find a movable object in another part of the org or elsewhere. :/
I always liked Shaw’s “The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”
The amount of drama about AI based upscaling seems disproportionate. I know framing it in terms of AI and hallucinated pixels makes it sound unnatural, but graphics rendering works with so many hacks and approximations.
Even without modern deep-learning based "AI", it's not like the pixels you see with traditional rendering pipelines were all artisanal and curated.
> AI upscaling is equivalent to lowering bitrate of compressed video.
When I was a kid, people had dozens of CDs with movies, while pretty much nobody had DVDs. DVD was simply too expensive, while Xvid allowed you to compress an entire movie onto a CD while keeping good quality. Of course the original DVD release would've been better, but we were too poor, and watching ten movies at 80% quality was better than watching one movie at 100% quality.
DLSS effectively lets you quadruple FPS with minimal subjective quality impact. Of course the natively rendered image would've been better, but most people are simply too poor to buy a gaming rig that plays the newest games at 4K 120 FPS on maximum settings. You can keep arguing as much as you want that the natively rendered image is better, but unless you send me money to buy a new PC, I'll keep using DLSS.
> I am certainly not going to celebrate the reduction in image quality
What about perceived image quality? If you are just playing the game, the chances of you noticing anything (unless you crank the upscaling up to the maximum) are near zero.
The contentious part, from what I gather, is the overhead of hallucinating these pixels, on cards that also cost a lot more than the previous generation for otherwise minimal gains outside of DLSS.
Some [0] are seeing a 20 to 30% drop in actual rendered frames when activating DLSS, and that means correspondingly more latency as well.
There are still games where it should be a decent tradeoff (racing or flight simulators? Infinite Nikki?), but it's definitely not a no-brainer.
I also find them completely useless for any games I want to play. I hope that AMD would release a card that just drops both of these but that's probably not realistic.
They will never drop ray tracing, some new games require ray tracing. The only case where I think it's not needed is some kind of specialized office prebuilt desktops or mini PCs.
There are a lot of theoretical arguments I could give you about how almost all cases where hardware BVH can be used, there are better and smarter algorithms to be using instead. Being proud of your hardware BVH implementation is kind of like being proud of your ultra-optimised hardware bubblesort implementation.
But how about a practical argument instead. Enabling raytracing in games tends to suck. The graphical improvements on offer are simply not worth the performance cost.
A common argument is that we don't have fast enough hardware yet, or that developers haven't been able to use raytracing to its fullest yet, but it's been a pretty long damn time since this hardware went mainstream.
I think the most damning evidence of this is the just-released Battlefield 6. This is a franchise that previously had raytracing as a top-level feature. This new release doesn't support it, and doesn't intend to.
> But how about a practical argument instead. Enabling raytracing in games tends to suck. The graphical improvements on offer are simply not worth the performance cost.
Pretty much this - even in games that have good ray tracing, I can't tell when it's on or off (except for the FPS hit). I cared so little I bought a card not known to be good at it (7900 XTX), because the two games I play the most don't support it anyway.
They oversold the technology/benefits and I wasn't buying it.
There always were, and always are, people who swear they can't see the difference with anything above 25 Hz, 30 Hz, 60 Hz, 120 Hz, HD, Full HD, 2K, 4K. Now it's ray tracing, right.
I can see the difference in all of those. I can even see the difference between 120hz and 240hz, and now I play on 240hz.
Ray tracing looks almost indistinguishable from really good rasterized lighting in MOST conditions. In scenes with high amounts of gloss and reflections, it's a little more pronounced. A little.
From my perspective, you're getting, like, a 5% improvement in only one specific aspect of graphics in exchange for a 200% cost.
CP2077 rasterization vs ray tracing vs path tracing is like night and day. Rasterization looks "gamey". Path tracing makes it look pre-rendered. Huge difference.
CP2077 purposefully has as many glossy surfaces as humanly possible just for this effect. It somewhat makes sense with the context. Everything is chrome in the future, I guess.
As soon as you remove the ridiculous amounts of gloss, the difference is almost imperceptible.
There’s an important distinction between being able to see the difference and caring about it. I can tell the difference between 30Hz and 60Hz but it makes no difference to my enjoyment of the game. (What can I say - I’m a 90s kid and 30fps was a luxury when I was growing up.) Similarly, I can tell the difference between ray traced reflections and screen space reflections because I know what to look for. But if I’m looking, that can only be because the game itself isn’t very engaging.
I think one of the challenges is that game designers have gotten so good at working within the non-RT constraints (and pushing those constraints back) that it's a tall order for RT's improvements to pay back its performance cost (and the new rendering quirks). There's also the fact that most companies don't want to cut off potential customers, whether because their hardware can't do RT at all or because of how it performs when it does. The other big question is whether a studio is trying to recreate a similar environment with RT, or taking advantage of what is only possible with the new technique, such as dynamic lighting, and whether that matters for the game they want to make.
To me, the appeal is that game environments can now be way more dynamic because we're no longer limited by prebaked lighting. The Finals does this, but doesn't require ray tracing, and it's pretty easy to tell when ray tracing is enabled: https://youtu.be/MxkRJ_7sg8Y
Because enabling raytracing means the game has to support non-raytracing too, which limits how the game's design can take advantage of raytracing being realtime.
The only exception to this I've seen is The Finals: https://youtu.be/MxkRJ_7sg8Y . Made by ex-Battlefield devs, and the dynamic environment they shipped 2 years ago is on a whole other level even compared to Battlefield 6.
There's also Metro: Exodus, which the developers have re-made to only support RT lighting. DigitalFoundry made a nice video on it: https://www.youtube.com/watch?v=NbpZCSf4_Yk
Naive q: could games detect when the user is "looking around" at breathtaking scenery and raytrace just those moments? Offer a button to "take picture" and let the user specify how long to raytrace? Then, for heavy action and motion, ditch the raytracing. Even better, as the user passes through "scenic" areas, automatically take pictures in the background. Heck, this could be an upsell, kind of like the real-life pictures you get on the roller coaster... #donthate
Even without RT, I think it'd be beneficial to tune graphics settings depending on context; in an action/combat scene there are likely aspects the player isn't paying attention to. I think the challenge is that it's more developer work, whether it's done by implementing some automatic detection or by setting it manually scene by scene during development (which studios probably already do where they can set up specific arenas). An additional task is making sure there's no glaring difference between tuning levels, and setting a baseline you can't go beneath.
It will never be fast enough to work in real time without compromising some aspect of the player's experience.
Ray tracing is solving the light transport problem in the hardest way possible. Each additional bounce adds exponentially more computational work, and the control flow gets very branchy once you get into wild indirect-lighting scenarios. GPUs prefer straight SIMD flows, not hierarchical rabbit-hole exploration. Disney still uses CPU-based render farms. There's no way you are reasonably emulating that experience in <16 ms.
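A rough illustration of the bounce blow-up (the branching factor is an assumption; real path tracers keep this in check with single-sample paths and Russian roulette, which is part of why their output is so noisy and needs denoising):

    # Illustrative only: if each hit spawns `branching` secondary rays, total
    # rays grow geometrically with bounce depth.
    pixels = 3840 * 2160
    samples_per_pixel = 1
    branching = 4            # assumed secondary rays per bounce

    for depth in range(1, 5):
        rays = pixels * samples_per_pixel * sum(branching**d for d in range(depth + 1))
        print(f"max depth {depth}: ~{rays/1e6:,.0f} M rays per frame")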
The closest thing we have to functional ray tracing for gaming is light mapping. This is effectively just ray tracing done ahead of time, but the advantage is you can bake for hours to get insanely accurate light maps and then push 200+ fps on moderate hardware. It's almost like you are cheating the universe when this is done well.
The human brain has a built in TAA solution that excels as frame latencies drop into single digit milliseconds.
The problem is the demand for dynamic content in AAA games: large exterior and interior worlds with dynamic lights, day and night cycles, glass and translucent objects, mirrors, water, fog and smoke. Everything should be interactable and destructible, and everything should be easy for artists to set up.
I would say the closest we can get is workarounds like radiance cascades. But everything other than raytracing is an ugly workaround that falls apart in dynamic scenarios. And don't forget that baking times, and the massive game sizes from storing the baked results, are a huge negative.
Funnily enough, raytracing is also just an approximation of the real world, but at least artists and devs can expect it to work everywhere without hacks (in theory).
Manually placed lights and baking not only take time away from iteration but also take a lot of disk space for the shadow maps. RT makes development faster for the artists; I think DF even mentioned that doing Doom Eternal without RT would take so much disk space it wouldn't be possible to ship it.
edit: not Doom Eternal, it's Doom: The Dark Ages, the latest one.
The quoted number was in the range of 70-100 GB if I recall correctly, which is not that significant for modern game sizes. I'm sure a lot of people would opt to use it as an option, as a trade-off for having 2-3x higher framerates. I don't think anyone realistically complains about video game lighting looking too "gamey" in the middle of an intense combat sequence. Why optimize a Doom game, of all things, for standing still and side-by-side comparisons? I'm guessing NVIDIA paid good money for making RT tech mandatory.
And as for the shortened development cycle, perhaps it's cynical, but I find it difficult to sympathize when the resulting product is still sold for €80.
It's fast enough today. Metro Exodus, an RT-only game runs just fine at around 60 fps for me on a 3060 Ti. Looks gorgeous.
Light mapping is a cute trick and the reason why Mirror's Edge still looks so good after all these years, but it requires doing away with dynamic lighting, which is a non-starter for most games.
I want my true-to-life dynamic lighting in games thank you very much.
Most modern engines support (and encourage) use of a mixed lighting mode. You can have the best of both worlds. One directional RT light probably isn't going to ruin the pudding if the rest of the lights are baked.
Much higher resource demands, which then require tricks like upscaling to compensate. You also get uneven competition between GPU vendors, because in practice it is not generic hardware ray tracing but Nvidia ray tracing.
On a more subjective note, you get less interesting art styles, because studios somehow have to cram raytracing in as a value proposition.
Not OP, but a lot of the current kvetching about hardware-based ray tracing is that it's basically an Nvidia-exclusive party trick, similar to DLSS and PhysX. AMD has this inferiority complex where Nvidia must not be allowed to innovate with a hardware+software solution; it must be pure hardware so AMD can compete on their terms.
1. People somehow think that just because today's hardware can't handle RT all that well it will never be able to. A laughable position of course.
2. People turn on RT in games not designed with it in mind and therefore observe only minor graphical improvements for vastly reduced performance. Simple chicken-and-egg problem, hardware improvements will fix it.
The gimmicks aren't the product, and the customers of frontier technologies aren't the consumers. The gamers and redditors and smartphone fanatics, the fleets of people who dutifully buy, are the QA teams.
In accelerated compute, the largest areas of interest for advancement are 1) simulation and modeling and 2) learning and inference.
That's why this doesn't make sense to a lot of people. Sony and AMD aren't trying to extend current trends, they're leveraging their portfolios to make the advancements that will shape future markets 20-40 years out. It's really quite bold.
And they're achieving "acceptable" frame rates and resolutions by sacrificing image quality in ways that aren't as easily quantified, so those downsides can be swept under the rug. Nobody's graphics benchmark emits metrics for how much ghosting is caused by the temporal antialiasing, or how much blurring the RT denoiser causes (or how much noise makes it past the denoiser). But they make for great static screenshots.
I disagree. From what I’ve read if the game can leverage RT the artists save a considerable amount of time when iterating the level designs. Before RT they had to place lights manually and any change to the level involved a lot of rework. This also saves storage since there’s no need to bake shadow maps.
So what stops the developers from iterating on a raytraced version of the game during development, and then running a shadow precalculation step once the game is ready to be shipped? Make it an optional download, like the high-resolution texture packs. Otherwise they are offloading the processing power and energy requirements onto consumer PCs, and doing so in a very inefficient manner.
> Someone who thinks COVID was a hoax isn't going to be one to dig deep.
This is kind of a side point, but people with fringe beliefs tend to dig a lot deeper to validate those opinions than those with a mainstream view.
You can bet that someone who thinks that the moon landing was a hoax to the point that they would tell someone about it will know more about the moon landing than a random person who believes it was real.
It often takes an expert in something to shoot down the arguments.
> but people with fringe beliefs tend to dig a lot deeper
Do they actually, though? Or do they just look for endless superficial surface claims?
I mean, if they actually dug deep they'd encounter all kinds of information indicating that the moon landing was real. If they still maintain it was a hoax in light of that, then they have to believe that the deep information is also a hoax. So if someone really was digging deep into the personal details of your life, then whatever they read about you must also be a hoax, naturally.
Which, given the concern, one may as well solidify by putting fake information out there about themself. No sane person is going to be searching high and low for details about your personal life anyway. A moon landing hoax believer isn't going to buy into a published academic paper or whatever breadcrumb you accidentally left as a source of truth to prove that you have a PhD when a random website with a Geocities-style design says that you never went to college!
There is an infinite supply of people spouting bullshit and validation of that bullshit on the internet. You can spend a lifetime reading through that bullshit, and certainly feel like you're "doing research".
I am utterly fascinated by the flat earth movement, not because I believe in a flat earth, but because it's so plainly idiotic and yet people will claim they've done experiments and research and dug deep, primarily because they either don't know how to read a paper or how to interpret an experiment or simply don't know how lenses work. It's incredible.
> You can spend a lifetime reading through that bullshit, and certainly feel like you're "doing research".
I'm not sure broad and deep are the same thing, but maybe we're just getting caught up in semantics?
> It's incredible.
Does anyone truly believe in a flat earth, though, or is it just an entertaining ruse? I hate to say it, but it can actually be pretty funny watching people nonsensically fall over themselves to try and prove you wrong. I get why someone would pretend.
> I'm not sure broad and deep are the same thing, but maybe we're just getting caught up in semantics?
They’re not the same thing but I think they’re still going “deep” in that they will focus very heavily on one subject in their conspiracy rabbit hole.
> Does anyone truly believe in a flat earth, though, or is it just an entertaining ruse?
I think that a lot of people are faking, but I am pretty convinced that at least some people believe it. There was that dude a few years ago who was trying to build a rocket to “see if he could see the curve”, for example.
I have seen some fairly convincing vlogs where the people at least seem to really believe it.
> I think they’re still going “deep” in that they will focus very heavily on one subject in their conspiracy rabbit hole.
Which is totally fair, but may not be what I imagined when I said "deep".
> There was that dude a few years ago who was trying to build a rocket to “see if he could see the curve”, for example.
Building a rocket sounds like fun, to be honest. If you are also of the proclivity that you are entertained by claiming to believe in a flat earth, combining your hobbies seems like a pretty good idea.
> I have seen some fairly convincing vlogs where the people at least seem to really believe it.
At the same time, people don't normally talk about the things they (feel they) truly understand. That's why we don't sit around talking about 1+1=2 all day, every day. Humans crave novelty; it is agonizing having to listen to what you already know. As such, you should be heavily skeptical of someone speaking at length about a subject they claim to understand well, unless there's a financial incentive to overcome the boredom of talking about something they know well. And where there is a financial incentive, you still need to be skeptical that someone isn't just making things up for profit.
When someone is speaking casually about something, you can be fairly certain that either: 1) they recognize they don't have a solid understanding and are looking to learn more through conversation, or 2) they are making things up for attention.
There is no good way to know how many flat earthers never speak of it, I suppose, but as far as the vocal ones go I don't suppose they are really looking to learn more...
When I built my house, I went full home automation. At the time I was telling my friends how important it was not to have a cloud dependency, and how I was doing everything locally.
I use KNX as the main backbone and Home Assistant for control.
And everything was local with the one exception of my Kevo door lock. At the time I built, there just wasn’t a perfect local only solution.
I hadn’t planned properly for a way to integrate a wired-in solution into the joinery around the door, due to the particular circumstances of where it was, so I needed something wireless, and nothing wireless was local-only at the time.
What pisses me off is that it’s the one thing I compromised on, and it’s the one thing that bit me.
Now I have very little notice to find a replacement with the same features.
My house lock is probably the one place where I'm not prepared to compromise security with a DIY solution. Not talking about the software security (in fact open source solutions are probably more secure) but literally the hardware and build quality of any DIY work.
I think you'll find it not as compromising as you believe, and it might be a fun project.
Since you'll likely be scrapping it in some fashion, might want to try disassembling it first to see what would need to be done.
If you are not handy with electronics, there is also a chance there will be some workaround for the 3rd-party server at some point, as in the protocol and such being deciphered, or a custom firmware you can build and flash.
If you do get it working, it would make a great spare.
That's kind of funny though, as any lock can be picked. If someone wants into your house, most of the time they will not enter through the locked front door; they'll find a window in the back that is easier to open with whatever they find in your back yard. They might exit through the front door on their way out, though. Also, most locks are easily picked by someone with practice.
If memory serves, something like 2% of break ins use "lock picking" which includes shimming a sliding door, a very low skill attack. Criminals just don't use high skill attacks to burgle homes. Probably a combination of most crimes being opportunistic, most criminals doing them being low skilled themselves, and people like us not being rich enough to move into the level of being targeted by the minuscule percent of high skill burglars.
One of their digital lock designs had a rather cough Pleasing vulnerability. But other than that it's vendor lock-in (heh), and lack of availability in the US.
With most so called locksmiths being drillsmiths in the US, not being able to clone DD and dimple keys.
Puck one. Or maybe the OP is just bitter they can't pick it for their next "belt" after getting chuffed with themselves picking average american garbage.
Digital locks aside, this is more applicable to any lock you buy and rely on (substitute US with your local region):
> lack of availability in the US
I wouldn't go out of my way to find something like Schlage here, when Abloy (Assa Abloy) locks are available in abundance with locksmiths able to duplicate usually all the key variants.
No, there was a vending machine smart lock that if you hitachi'd it right it'd unlock.
And I phrased it wrong: most people expect to be able to walk into Lowe's and clone a key. And while it seems Assa has been on a buying spree since I last looked at them, I don't associate them with anything you'd be able to find at a big-box store. When I think Assa Abloy, I think "you better have the key card or you're SOL."
As a European, most of the products mentioned in the linked article and this discussion are from brands I've never associated with Assa Abloy in the first place.
I do agree with you, but I think there's a non-zero chance the situation might be different now.
We are not getting the same insane gains from node shrinks anymore.
Imagine the bubble pops tomorrow. You would have an excess of compute using current gen tech, and the insane investments required to get to the next node shrink using our current path might no longer be economically justifiable while such an excess of compute exists.
It might be that you need to have a much bigger gap than what we are currently seeing in order to actually get enough of a boost to make it worthwhile.
Not saying that is what would happen, I'm just saying it's not impossible either.
You kind of missed the point. It doesn't matter if what they did is or isn't real science. They believe it is, and so as far as they are concerned, it's proven.
So then what? Since they really believe what they said, how can you blame them for their actions?
You might argue that since they are wrong, their beliefs should be changed. Well sure, maybe they should.
You could commission a study to confirm that, then try to persuade people. Perhaps form a collective to persuade others of that belief. Oh wait....
You asked a question about what I would do if I had a belief that somebody was harming society.
Historically, my observation is that some of the most evil things ever perpetrated by humans were done in the name of trying to make society better. So I'm pretty hesitant to enforce my views on other people, or even attempt to. If they were acting in good faith, this would be something like a Black Lives Matter-style approach, trying to raise public awareness around an issue. But they're not acting in good faith, trying to get society to see their point of view. Instead, they are going after the fulcrums of society and enforcing their view through backroom deals. It's a transparent power play, and it's not in good faith: real good-faith actors look at both sides of an issue, both the values and the harm, and try to develop a balanced response. That is not what's happening here.
A company runs an online game. The actual online infrastructure is a bunch of different services held together with string. They have licensed some proprietary database. The studio runs out of money and lays off all the staff.
The Stop Killing Games website's FAQ does specifically mention this scenario:
>For existing video games, it's possible that some being sold cannot have an "end of life" plan as they were created with necessary software that the publisher doesn't have permission to redistribute. Games like these would need to be either retired or grandfathered in before new law went into effect. For the European Citizens' Initiative in particular, even if passed, its effects would not be retroactive. So while it may not be possible to prevent some existing games from being destroyed, if the law were to change, future games could be designed with "end of life" plans and stop this trend.
That is just not going to happen. Sweeping regulations that govern how games must be designed and created will just mean no game is made or sold in that market, or the whole thing will be circumvented using malicious compliance. So an "end of life" plan just means your client still runs but it stops being a playable game.
Categorisation, maybe even like how cigarettes are sold, with big warnings, and Steam made to add a filter that hides games with no EOL support, would go a long way toward the market coming up with solutions, rather than legislation forcing creators to make games one particular way.
So does that mean that games are not allowed to use proprietary technologies in their backends anymore? That seems like a strange restriction to apply only to games.
Sure they can use proprietary technology in their backend. They just need permission to redistribute it, like is currently the case with any proprietary technology used by the client.
Alternatively they can use proprietary technology in the backend without permission to redistribute, so long as it is replaced before support is ended.
They are allowed to use that. There just needs to be a way for that proprietary technology to be bought and run on a backend by a user if the company deems the backend too costly to continue providing. That's my understanding.
The game should remain playable, even if via LAN only. How that is accomplished is the responsibility of the studio, not the player - maybe they should think twice before licensing proprietary components that players cannot run themselves.
If the company fails to do this, they are effectively committing theft, and should be punished accordingly by the law. If studio execs think this is an unreasonable thing to do, then they're free to not release their games to the public and keep their proprietary services to themselves.
It's these extreme implications that are why I, as a gamer and software dev, haven't signed the initiative. A lot of these things are just not feasible, and it'll be so much harder on the indie devs than on the Bungies/Blizzards.
I'm afraid that if this is pushed through, the studios will just switch all online experiences to be fully subscription-based. No more purchasing the game; you just pay for a month of the experience.
As a software developer, do you genuinely believe it is harder for indie game developers to build online infrastructure and pay for its hosting costs than to build a LAN feature into the game, or to package local server binaries with the game as was done just a couple of decades ago?
Most indie games I've played don't even run their own online infrastructure, because of costs. Why bother, when you can just use a storefront's matchmaking for free? And storefronts provide it as a means of soft lock-in. For example, one of my favorites, Deep Rock Galactic, doesn't have crossplay between the Steam PC version and the Xbox PC store version of the game.
And there's already software to emulate Steam's matchmaking because it's so common.
> neither does it expect the publisher to provide resources for the said videogame once they discontinue it while leaving it in a reasonably functional (playable) state.
2. If you sell an online game, you need to release a server - not the source code, but binaries. Yes, I know it is more work, but lazy people were also complaining that adding the mandatory "Unsubscribe" button in email was too much work or even impossible, yet they eventually did it, and nobody dares today to defend that this rule was bad.
3. Make your game/software work offline. This entire thing was started because a greedy company killed a game that also had a single-player mode. So if I bought Minecraft to play mostly single player, I would be super pissed if Microsoft killed it because they were really special bastards who decided to force a Microsoft login to let me play the game. When they abandon the game they need to make a final update and disable the online requirement.
4. The game will not be sold but will be "rented for N years"; the customer knows what to expect, and the developer is forced to keep the game working for N years from the last purchase or refund the users. So if you launch some AAA battle royale shit and it fails, you refund your customers if you don't want to keep the servers running.
5. If all of the above is too much work, then do not sell in the EU or whatever US state is demanding that you not brick your sold software or hardware when you stop your servers because reasons.
Similar to car manufacturers: even if you're going out of business you can't just say "no more parts"; you have to supply them for 10-15 years. In most countries, if you're not willing to continue producing parts, you have to transfer the IP and tooling to someone who will.
Theoretically it’s the same with any asset you pay someone to make.
If you pay someone to make a chair, you don’t deduct the salary. Instead you create an asset valued at what you paid to build it, then depreciate it over time.
The argument for this is that it would be inconsistent to do otherwise. After all, why should buying a chair from someone else be different from paying an employee to build it?
It’s worth noting that this change brings the USA in line with international financial reporting standards, so it’s not like it’s some crazy unique idea or anything.
> Theoretically it’s the same with any asset you pay someone to make.
No, it's not.
Sec. 174 explicitly and specifically refers only to software development.
Also, this:
> If you pay someone to make a chair, you don’t deduct the salary. Instead you create an asset valued at what you paid to build it, then depreciate it over time.
is also incorrect. For most tax filers, and for most things, under current law, you have a choice whether to deduct the expense in the year in which it incurred or to amortize it.
If you pay an employee to make a chair, you 100% deduct their salary, immediately. The chair is only a capital expense if you buy it from a company that sells chairs. The company selling the chairs isn't forced to amortize the salaries of their carpenters, so implying that it's normal for companies to be forced to amortize the salaries of their software engineers is, in the most generous possible interpretation, a gross misunderstanding of the law.
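For a concrete sense of scale, here's a rough worked example under the post-2022 Sec. 174 rules as I understand them (5-year amortization for domestic work with a half-year convention in year one; illustrative figures, not tax advice):

    # Rough illustration of the Sec. 174 change (assumed figures):
    # $1M of domestic software-developer salaries, ~21% corporate rate.
    salaries = 1_000_000
    rate = 0.21

    # Old treatment: expense immediately.
    old_deduction_yr1 = salaries

    # Post-2022 Sec. 174: amortize over 5 years, half-year convention in year 1.
    new_deduction_yr1 = salaries / 5 / 2          # only $100,000 deductible

    extra_taxable_income = old_deduction_yr1 - new_deduction_yr1
    print(f"year-1 deduction falls from ${old_deduction_yr1:,.0f} "
          f"to ${new_deduction_yr1:,.0f}")
    print(f"extra year-1 tax: ~${extra_taxable_income * rate:,.0f}")

That gap is why a company that breaks even on a cash basis can suddenly owe meaningful tax in year one.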
> this change brings the USA in line with international financial reporting standards
If you pay people to make 1000 chairs that are just sitting there, do you really think that you don’t have an asset on your books at all? This is called Inventory. It’s certainly an asset.
And an asset doesn’t come into existence out of nowhere. It comes into existence because you paid money for it, and the money you pay for it is indeed the person’s salary.
Now sure, it’s possible to get away with not doing this, but it’s not correct by accounting standards to do so.
As for which standards, International Financial Reporting Standard (IFRS)
>If you pay someone to make a chair, you don’t deduct the salary.
If they make the chair. What if they only draw up blueprints for a chair that isn't manufactured? What if the chair is never manufactured, or won't be manufactured for two years? Until the software is licensed and installed at a customer site, how is this at all like making a chair?
> The arguement for this is that it would be inconsistent to do otherwise. After all, why should buying a chair from someone else be different than paying an employee to do it?
Probably exposing how little I know of accounting... If you buy a chair you have to track it and deduct it over the course of X years?! It's not just an expense the year you bought it?
Most of the time you can decide what you want to do. There are exceptions, but for most capital expenses (which salary is not, despite what proponents of this change would argue), you can choose to either deduct all of it at once or amortize it. It also depends on how you categorize expenses.
A $100 chair is unlikely to get amortized, but a $100 chair as part of $450k office remodel might.
> It’s worth noting that this change brings the USA in line with international financial reporting standards, so it’s not like it’s some crazy unique idea or anything.