
That's exactly right. The issue I ran into is that the Playdate's CPU doesn't have a lot of juice, and spends a lot of time every frame just filling the screen. There's no specialized blitting hardware or GPU. Hence realtime sprite scaling is not feasible.

I tried making some pre-scaled sprites, and the ROM footprint had to be pretty large to afford enough size steps for reasonably smooth motion as the train moved past objects. Even then, all that sprite drawing was starting to dip the frame rate.

So, I experimented with a pre-rendered approach, and that was a huge unlock. Eventually I was able to wrangle it into something that I think worked even better in the end! The pre-rendered, video-based approach allowed way more elaborate detail, and I didn't have to worry about frame rate anymore.


> Hence realtime sprite scaling is not feasible.

The Playdate has an STM32F746 (ARM Cortex-M7F) at 180 MHz, with 320 kB of SRAM (and 16 MB of other RAM, maybe PSRAM?). DSP and saturation instructions are included.

The display is 400x240 1bpp.

Surely that's more than enough juice for sprite scaling? (Might need to copy graphics data to SRAM and maybe even do it in tiles for better performance.)
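
For a sense of where the time goes, here's a rough sketch in C of nearest-neighbor scaling for a single sprite row. The packed 1bpp row layout and function names are made up for illustration; this isn't the Playdate SDK's actual drawing API:

    #include <stdint.h>

    /* Hypothetical layout: sprite rows and the framebuffer are packed 1bpp,
       MSB first, 8 pixels per byte. Illustration only, not the Playdate
       SDK's actual representation. */

    static inline int get_px(const uint8_t *row, int x) {
        return (row[x >> 3] >> (7 - (x & 7))) & 1;
    }

    static inline void set_px(uint8_t *row, int x, int v) {
        uint8_t mask = (uint8_t)(1u << (7 - (x & 7)));
        if (v) row[x >> 3] |= mask;
        else   row[x >> 3] &= (uint8_t)~mask;
    }

    /* Nearest-neighbor scale of one source row (src_w pixels) to dst_w pixels.
       Every destination pixel costs a multiply/divide plus shift-and-mask work,
       repeated for every row of every sprite, every frame, with no blitter to
       offload it. */
    void scale_row_1bpp(const uint8_t *src, int src_w, uint8_t *dst, int dst_w) {
        for (int dx = 0; dx < dst_w; dx++) {
            int sx = (int)((int64_t)dx * src_w / dst_w);  /* nearest source pixel */
            set_px(dst, dx, get_px(src, sx));
        }
    }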

Although your video based implementation is pretty cool.


I double dog dare you to port doom


https://news.ycombinator.com/item?id=41013159 - Doom on Playdate (2024-07-20, 39 comments)


I'm impressed the video fits!


Do you mean Slipheed? Because that game is insane


Was curious and looked it up. I think everyone's autocorrect is a bit hardcore this evening. It's Silpheed


Yes, that's the one! I can't blame autocorrect, just a regular typo


I love Densha de Go. DDG Final is one of my top 10 fav games of all time. When I got a Playdate, I had a vision of using the crank as a train's master controller and I was compelled to see it through...


Hi everybody. I'm Hunter, the developer of the game. I saw an uptick in sales over the past few hours and managed to trace it back here.

So pleasantly surprised to see Zero Zero on HN! Thank you so much for the support and I hope all you Playdate owners out there enjoy the game!!


Interestingly enough, until a few decades ago cranks were used to control the acceleration and (electric) braking in trams, electric locomotives, etc. (https://de.wikipedia.org/wiki/Fahrschalter#/media/Datei:Gmun...). The rotation axis was vertical rather than horizontal, and the feel and clicking noise of all the contacts opening and closing as you turn the crank would be lost, but still, a Playdate would be great for simulating driving such a "classic" tram.


Great job, I’m glad someone finally made a Densha-De-Go-like! Such a great gameplay concept!

Question: is the route accurate to a real train, or is it fictional?


It's loosely based on a 6-station stretch of the Fuji Kyuukou line in Yamanashi. Not sure my environment art really did the real life area justice, but that's how I derived the stations and spacing between them. https://en.wikipedia.org/wiki/Fujikyuko_Line


Congrats, always great to see stuff built for handheld consoles

I'm surprised at the file size, though; with such a low-res display and no color, you'd think you'd be able to shrink the assets down more.

Compared to a Densha de Go ROM for the N64, the difference is huge, and yet that one had color and 2x the res.


The Playdate game is larger because it's using pre-rendered video.


Hi.

I saw that there are scoreboards on the website. Does this mean that the game includes some form of user tracking? Does the website (play.date) know how often a user plays, when, for how long, how good he/she is at it, etc.? Is there a unique user ID or device ID that they can pair with past and future acquisitions in the shop?

I don't own that game and I probably never will. I'm just asking this to educate myself about what's possible in terms of invasion of privacy by using a low power console such as this one.

Thanks.


I'll answer just in case you're actually interested in having a conversation. All of the scoreboard info can be found in the API docs:

https://help.play.date/catalog-developer/scoreboard-api/

Don't want scoreboards? Don't connect to wifi. It baffles me that someone would be paranoid about this.


Presumably you need to be connected to wifi to obtain games, so that's not a great answer.

I also found the scoreboards surprising. I wouldn't expect a Gameboy-like device to be reporting things back to a server. I don't have any particular privacy concerns about it myself, but it is surprising, and I can see why people might object.

It also seems to contradict their privacy policy (https://panic.com/privacy/):

    Panic apps and products _do not_ send out _any_ private information. This includes
    ...
    Usernames
    ...
But the scoreboard API seems like it's tied to usernames...


Given the context around that line, I interpret that as applying to their actual Mac applications that deal with usernames/passwords/hostnames, especially since the sentence that precedes it is

> Except as described above

And further up there is a carve-out for Playdate logging.

They also don't really need to send out your username for the API call -- they already know who you are because your device is tied to your account.

I do find it surprising that there is no way to not participate in leaderboards.


> Presumably you need to be connected to wifi to obtain games

Not only that, but the API link provided by the author specifies that scores are stored on the device if wifi is offline and uploaded later, so apparently there's no way to block uploading unless the device is hard-reset (is that even possible?) before connecting to wifi to get another game.

The only good thing I see is that the score board is not mandatory and it seems pretty benign if no other data is attached by the OS when scores are uploaded. But then, if scores are supported, 90% of the tracking infrastructure is already there.


You might want to read the Usage Analytics section of their privacy policy.

https://panic.com/privacy/


I really am interested, and it baffles me that nobody else has privacy as a priority in their life but downvoted my question instead.

Does playdate.scoreboards.addScore send just the rank, player, and value data, or is there a device ID added in the background? If not, how do they prevent spamming/cheating? Is the player value user-defined? For this game only, or for all games on the device? Is it sent over HTTPS or just HTTP?

Thank you.


You are probably being downvoted for derailing. This is a post about a particular game, but this thread is turning into an ideological battle unrelated to the game. Discussions regarding people's priorities and privacy concerns would be better off as a separate post.


Oh. Ok. I'll keep that in mind next time.


You read that wrong. What you're referring to is the response you get from the server in the form of callback parameters. The function signature is

    playdate.scoreboards.addScore(boardID, value, callback)
So you only send the boardID and the score. You get the player name back as part of the callback.


My point is there are fundamental computing concepts that you can pick up by learning C. In a world of high-level, low-LOC languages you can get by without learning those concepts, but it serves your and the ecosystem's best interest to learn them.


I think the disagreement we have may stem from our notions of what constitutes "fundamental computing concepts." I rank the lambda calculus much higher than C or assembly language when it comes to that. I would say that knowing your data structures and how to analyze algorithms asymptotically is vastly more important than knowing how code is being executed at a low level.

Even for the cases where low-level code must be written, I would say we need people who know assembly language and compiler theory more than we need people who know C. There is no particularly good reason for C to be anywhere in the software stack; you can bootstrap Lisp, ML, etc. without writing any C code. We need people who know how to write optimizing compilers; those people do not need to know C, nor should they waste their time with C.

Really, the most important computing concept people need to learn is abstraction. Understanding that a program can be executed as machine code, or an interpreted IR, or just interpreting an AST, and that code can itself be used to construct higher level abstractions is more important than learning any particular language.
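
To make the "interpreting an AST" rung of that ladder concrete, here's a minimal sketch in C (node layout and names invented for illustration) that evaluates an expression tree directly, with no compilation step at all:

    #include <stdio.h>

    /* A tiny expression AST: a node is either a literal number or a binary
       operation. Layout and names are invented for this sketch. */
    typedef struct Node {
        char op;                 /* '\0' for a literal; '+', '-' or '*' otherwise */
        double value;            /* used when op == '\0' */
        struct Node *lhs, *rhs;  /* used otherwise */
    } Node;

    /* Evaluate the tree directly: no machine code, no bytecode, just a
       recursive walk over the abstraction. */
    static double eval(const Node *n) {
        if (n->op == '\0') return n->value;
        double a = eval(n->lhs), b = eval(n->rhs);
        switch (n->op) {
            case '+': return a + b;
            case '-': return a - b;
            case '*': return a * b;
            default:  return 0.0;
        }
    }

    int main(void) {
        /* (2 + 3) * 4 */
        Node two   = { '\0', 2.0, NULL, NULL };
        Node three = { '\0', 3.0, NULL, NULL };
        Node four  = { '\0', 4.0, NULL, NULL };
        Node sum   = { '+', 0.0, &two, &three };
        Node prod  = { '*', 0.0, &sum, &four };
        printf("%g\n", eval(&prod));  /* prints 20 */
        return 0;
    }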


Except that C is all over the stack that most people work in every day, and not just way down at the level of the OS.

It's astounding to me how many of the people talking about Python, Ruby, and PHP as moments of great liberation from C appear not to realize how many of the most useful libraries in these languages are really just gentle wrappers around C libraries.

Someone needs to write that not-particularly-low-level code, and someone needs to hook it up to these miraculous high-level languages. The people who do this have always been a quieter bunch than the Pythonistas, the Rubyists, the Node-nuts, and whoever else, but damn do they know what they're doing. And they certainly don't go around talking about how C is obsolete, only for device drivers, and has nothing to do with their "stack."


> There is no particularly good reason for C to be anywhere in the software stack;

Really? Not anywhere?

Who is handling your hardware interrupts? How is your keyboard and mouse input being handled? What about your video card drivers?

Now I will grant that you can bootstrap an initial runtime in assembly and place your favorite high-level language on top of that; if you add extensions to your favorite language to better interact with hardware, you can do everything in a higher-level language. But as it stands, LISP doesn't have built-in support for doing a DMA copy from a memory buffer to a USB port.

My question then becomes, why the heck bootstrap in ASM rather than C?


As you said, there is no reason you cannot bootstrap in a high level language. Operating systems were written in Lisp at one time; they had device drivers, interrupts, etc.

My point is not that C is not used, but that there is no compelling technical reason to use C anywhere. The fact that Lisp and ML do not have standardized features for low-level operations is not really much of an argument. We could add those features, and we could do so with ease (CMUCL and SBCL already have low-level pointer operations and a form of inline assembly language); the only reason we do not is that nobody has time to rewrite billions of lines of C code, or perhaps more that nobody will spend the money to do such a thing. The existence of C at various levels of the software stack is a historical artifact, primarily a result of Unix having been written in C and OSes written in other languages having been marketed poorly.

The lesson is not that C is good for writing low-level code; the lesson is that technical features are not terribly important.

I would also point out that an OS is not just about interrupt handlers and device drivers. Most of an OS is high-level code that is connected to interrupt handlers and device drivers through an interface. Even if C were the best language in the world for writing low-level code, I would still question the use of C elsewhere (imagine, as an alternative, an OS that follows the design of Emacs -- a small core written in C, the rest written in Lisp).


> LISP doesn't have built in support for doing a DMA copy from memory buffer to a USB port.

Where in the ANSI/ISO C standard is this support described?


It doesn't (yet another example of why C isn't the best systems programming language), but the concepts of C (raw memory, pointers, flat buffers) map onto the underlying concepts pretty clearly.
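
For example, here's roughly what "raw memory plus a pointer" looks like when talking to hardware. The register address and bit are made up for illustration, not taken from any real part:

    #include <stdint.h>

    /* Hypothetical device register: the address and bit position are made up
       for illustration; real values come from the part's datasheet. */
    #define STATUS_REG  (*(volatile uint32_t *)0x40021000u)
    #define READY_BIT   (1u << 3)

    /* A pointer is just a number naming a memory location; dereferencing it
       touches the hardware directly, and `volatile` stops the compiler from
       caching the value in a register. */
    void wait_until_ready(void)
    {
        while ((STATUS_REG & READY_BIT) == 0) {
            /* spin until the device raises its ready bit */
        }
    }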

Now that said, a lot of other things (anything dealing with asynchronous programming) don't map onto C that well at all, and other languages do a much better job at solving some conceptual problems.

But that is why languages like LISP and Haskell are taught: so that even when one is stuck working in the C ghetto, higher-level concepts and more abstract coding patterns can still be brought to bear to solve problems. :)


Raw memory, pointers, and flat buffers exist in almost every systems programming language, even strongly typed ones.

My point was that what many developers think are C features for systems programming are in fact language extensions that most vendors happen to implement.

In this regard, the language is no better than any other that also requires extensions for the same purposes.


Agreed; OS kernels and firmware for embedded systems all require low-level access to hardware in a way that high-level desktop applications do not. Being able to easily reason about how C is going to use resources and be compiled down to machine code for the architecture you are using can sometimes be an important asset.


I think the point is that even if you accept that the kernel-level code and device drivers are all in C, from there there's less and less benefit to doing userland code in C... you could use Lisp, Erlang, Scheme, or a number of other languages for userland and service-oriented code.

Although I really don't care for Unity or Windows 8's UI, I do appreciate some of the directions they are going in terms of being able to create applications that are more abstracted in nature. I personally happen to like higher-level languages/environments, and modern hardware has been able to handle them very well for years.

I do think that certain patterns and practices that people have followed need to be rethought for parallelism, and that a thread/process per request in service-oriented architectures has become a bottleneck... but there are techniques, languages, and platforms that can take us much farther without digging into a low-level platform language like C.

I agree that knowing C is helpful, as is knowing assembly... that doesn't mean even a small fraction of developers should be working with them on a daily basis. Most code is one-off line-of-business application code and related services. It doesn't need sheer performance; it needs to be done and in production sooner... the next thing needs to get done. You can't create software as quickly in C/assembly as you can in Java/C# (or Python, Ruby, NodeJS).


I agree with you; in the cases you mentioned there don't seem to be any good arguments for not using a higher-level language with managed memory, properly implemented data structures, etc.

It seems like there are at least two threads of thought going on in the comments in general. One of them is, "does C have any role in any domain, and if so what is that domain?". I think that it does; software development is much wider than kernels, userland applications, and compilers, and there are fields where C and/or C++ are the right tools as things currently stand. I don't think anyone would argue that either language exists as a global optimum in any problem space, but from an engineering (rather than a theoretical purism) standpoint sometimes there are few practical alternatives. Maybe these domains are small, maybe they're unexciting, but they do exist.

The other is, "what is the point of learning C?". Maybe they want a deeper understanding of manual memory management, the concept of stack and heap storage, pointer manipulation, etc. Learning more about C to play with these concepts isn't a terrible idea, although it's not the only way to learn about these things. If nothing else, learning C and trying to implement your own parsers or data structures might be a good way to better understand why writing correct code in C that accounts for buffer overflows and string issues is so difficult, and what challenges higher-level languages face in order to overcome these flaws.


Forgoing application performance adds up to a lot of money for the likes of Google and Facebook in terms of server cost, cooling, and size. Maybe Go will displace C at Google, but I imagine only when it reaches performance parity.


> I would say that knowing your data structures and how to analyze algorithms asymptotically is vastly more important than knowing how code is being executed at a low level.

Except that most modern data structure research goes deep into cache awareness (i.e. structures that respect cache lines and algorithms that prevent cache misses and avoid pipeline stalling), which requires understanding of the hardware and the instruction set.

Knowing your big-O stuff is a prerequisite for modern algorithm design; it does not take you anywhere new, though.


Knowing your data structures does not mean being on the cutting edge of data structures research. It does mean knowing more than just vectors, lists, and hash tables. It means choosing the right data structures for your problem -- something that can have profound effects on performance, much more so than the cache.

Yes, people should know about the machine their code is running on, because when there are no asymptotic improvements to be made, constant factors start to matter. Right now, though, people tend to choose asymptotically suboptimal data structures and algorithms. Worrying about things like pipeline stalling when an algorithmic improvement is possible is basically the definition of premature optimization.
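
As a tiny, generic illustration (plain C, nothing specific to any codebase): moving from a linear scan to a binary search over sorted data is the kind of asymptotic win that no amount of cache-line tuning of the scan can recover:

    #include <stddef.h>

    /* O(n): touches every element in the worst case, no matter how
       cache-friendly the loop is. */
    int linear_search(const int *a, size_t n, int key) {
        for (size_t i = 0; i < n; i++)
            if (a[i] == key) return (int)i;
        return -1;
    }

    /* O(log n) on sorted input: for a million elements that's about 20 probes
       instead of up to a million -- a gap no pipeline-level tuning of the
       linear scan can close. */
    int binary_search(const int *a, size_t n, int key) {
        size_t lo = 0, hi = n;
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (a[mid] < key)      lo = mid + 1;
            else if (a[mid] > key) hi = mid;
            else                   return (int)mid;
        }
        return -1;
    }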


Might could do that! Actually one of the biggest mysteries to me for a while was how header files worked, how the compiler finds symbols, build dependencies, linking libs, etc. Having to mess around with Makefiles helps you digest those things, but then Xcode treats them in an entirely different way. #include is a complex beast


#include is actually a very simple beast; once you realize it just pastes the named file's text into place, it's much easier to digest.
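
To make that concrete, here's a minimal self-contained C sketch (the header name is made up) showing that #include is just text substitution, and that finding the definition is the linker's job rather than the preprocessor's:

    #include <stdio.h>

    /* Suppose geometry.h (a made-up header) contains only the line:
     *
     *     double area_of_circle(double radius);
     *
     * Writing `#include "geometry.h"` is exactly the same as typing that line
     * where the #include appears: the preprocessor pastes the file's text in
     * before the compiler proper ever runs. Finding the *definition* happens
     * later, at link time, which is where library paths and Makefile
     * dependencies come in. */

    double area_of_circle(double radius);     /* what the #include expands to */

    double area_of_circle(double radius) {    /* would normally live in geometry.c */
        return 3.14159265358979 * radius * radius;
    }

    int main(void) {
        printf("area = %f\n", area_of_circle(2.0));
        return 0;
    }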

