It reassures me that going with an Onyx Boox Tab Pro was the better choice. Full e-ink, not e-paper, but it can still switch to a mode where you could, if you wanted, watch a movie. The backlight isn't crazy orange; it's what you want (warmth controlled separately from brightness). It ignores your hand when you use the stylus, or at least uses hand gestures for different things than the stylus, and I never had an issue. Bluetooth keyboards work OK.
And it's Android (a plus for me: good software variety), with Google Services.
I love my Boox (Nova C color). I draw on it, have written hundreds of pages of journals on it longhand, and read essays, newspapers, comics, and books on it daily, and have for the past couple of years. I gave up other tablets once I got it.
What it is NOT good at is doom scrolling, social media, or video. The format and refresh rate actively discourage that... and also, the battery may go all day with wifi off, but it drains pretty quickly once you're online. It is definitely an offline device, with the full range of Android functionality (and amazing offline handwriting recognition).
I take it out to read and write for hours every evening, and don't carry anything else. Bar none the best device I've ever owned for mixing creative and literary pursuits and turning my back on the shittified internet.
They look like good products, but my daughter had a really bad experience with them: she bought one, it arrived with the screen broken, and they refused to accept that was possible, insisted she must have broken it herself, and denied a refund.
Unfortunately, Onyx Boox have used sockpuppets and other dirty tricks on Reddit[0] and elsewhere to harass and deter users reporting broken screens. Their international partners have webpages explaining how it's "impossible" for e-ink screens to be damaged or broken without you dropping or sitting on them.[1] In general, the company is hostile to anyone with damaged devices or other issues.
Although no one is perfect, I really like Supernote and their way of developing as much as possible in the open.[2] The devices are really great to use.[3]
Throwing another vote out there for Supernote. Very responsive, particularly around handwriting - which feels flawless. Been a perfect device for me, but more limited than a Boox. That’s fine for my use-case, but still worth calling out.
If you're using a credit card for its benefits (like buyer protection, which I have used several times over the years) and have the money to pay it off immediately, it's not pointless debt, but a net benefit to the purchaser.
I'm also in the Onyx BOOX camp (Max Lumi, going on my fourth year; numerous previous mentions/discussions in my HN history). Most of the observations on the Daylight Computer match my experience; readability under direct sunlight in particular (the exact opposite of all other portable devices I've used, save the similarly B&W LCD-based Palm Pilot ... oh, a quarter century ago now) really is quite nice.
Other features seem pretty comparable with the DC. I suspect DC's high-speed refresh beats E-ink, but that E-ink's persistence, resolution, and clarity are strong counterpoints.
The observations on B&W vs. colour are interesting, and mostly match my own experience. Coding (in vim under Termux) takes me back to monochrome-screen days (though those were largely green/amber rather than B&W), and the loss of colour syntax highlighting can be somewhat jarring (though I remember finding its garishness off-putting when I first began using it). I find the Web much less distracting in B&W, and only rarely miss colour, though for some data presentations (e.g., graphs and charts) it can be conspicuous by its absence. I'd like to try a colour e-ink device at some point.
For a device that maximises portability, preserves battery life, functions spectacularly in all lighting conditions (though diffuse overhead lighting tends toward glare), and is principally aimed at reading / listening / notetaking, with light technical work (largely under Termux w/ a Bluetooth keyboard) I strongly recommend the form-factor, and would suggest exploring either e-ink or e-paper depending on specific preferences, the key distinction likely being the refresh/persistence aspect noted previously.
I treat mine (Max Lumi, about 3 years old now) as an insecure device. There are (very nearly) no account-based services on it (occasional use of SSH roughly doubles that count), and the one I do use is a pseudonymous service which ... I've soured on sufficiently that I use it little if at all. The device is almost wholly dedicated to consumption, largely e-books, podcasts, and websites. That's a nonzero vulnerability surface, but it's pretty close to nil.
And for those purposes, the device is quite satisfactory.
My Boox is pretty crap. My use case is Libby for library books, and the display refreshing and other quirks make it almost unusable. It feels super cheap and unsupported.
Wow. This is so different from my experience with my Boox. I like the add-ons they made to improve the Android UI for e-ink purposes, and every weekly update makes it better. It feels extremely well supported to me.
What things that were previously done in assembly do you mean?
I remember Delphi nicely allowed including C++ files and assembly in the project too, so there's that. It's a very wordy language, but apart from the need to separate declarations and implementation, it was pretty nice to use, and powerful.
Many kinds of byte- or bit-level operations. I don't have access to much Turbo Pascal code, but it was always Pascal with a lot of inline assembly for specific things. Just found this with Google, for example: https://github.com/nickelsworth/swag/blob/7c21c0da2291fc249b...
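To make it concrete, here's a tiny C sketch of the sort of byte/bit-level fiddling that usually got dropped into inline assembly back then; the packing scheme is just illustrative, not from any particular program:

    /* Pack two 4-bit values into a byte and rotate bits -- the kind of
       operation that was a one-liner in x86 asm (ROL) but clumsy in
       vanilla Pascal. Plain C99, runnable anywhere. */
    #include <stdio.h>
    #include <stdint.h>

    /* Pack two nibbles into one byte (e.g. for packed pixel data). */
    static uint8_t pack_nibbles(uint8_t hi, uint8_t lo) {
        return (uint8_t)((hi << 4) | (lo & 0x0F));
    }

    /* Rotate a byte left by n bits -- a single ROL instruction in asm. */
    static uint8_t rol8(uint8_t v, unsigned n) {
        n &= 7;
        return (uint8_t)(((v << n) | (v >> (8 - n))) & 0xFF);
    }

    int main(void) {
        uint8_t b = pack_nibbles(0xA, 0x5);
        printf("packed: 0x%02X, rotated: 0x%02X\n", b, rol8(b, 3));
        return 0;
    }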
Do you also feel like you did projects in your teen years that you wouldn't pull off today?
I coded assembly (Z80) in a disk editor, having no assembler, just a handwritten list of opcodes.
I wrote a chat client app (in Delphi) HAVING NO INTERNET at home: I went to a cafe, recorded a dump with Wireshark, printed it on a dot-matrix printer, and used a marker pen to highlight the bytes whose meaning I understood.
Yup. I wrote a TSR program to extend Deluxe Paint II on the PC with animation features. I'd never heard of APIs, and simply copied data from the VGA memory, overlaying my own user interface on top of DP.
The thought of doing this with contemporary systems just seems ... wrong :)
I wrote lots of TSR extensions. I remember one that would let you switch between DOS programs... it broke when the next version of DOS came out.
I was most fond of my image viewer program, which worked only on my video card (a Cirrus Logic) but was 10x faster than others because it used some hardware acceleration on the card (I don't remember the details right now).
It's basically impossible to replicate this level of tinkering on today's extra-locked-down machines; the barrier to entry, to just tinkering, is so much higher.
It's all still available, and arguably more accessible from a cost and documentation point of view - what I'd argue is that the bar to getting something working is so much higher.
You can still start banging registers on GPUs today, or write your own shader programs; there's well-documented modern hardware (e.g. [0]) and completely open-source drivers just sitting there for reference. The issue is that there are so many moving parts now that to get anything visible to actually happen, it's not just wiring up a couple of registers, but thousands just to get the hardware into a state where you can even see the registers you want, then a thousand more.
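For contrast, here's roughly what "wiring up a couple of registers" used to look like: a hedged sketch of plotting a pixel in VGA mode 13h under DOS. It assumes a 16-bit Borland-style compiler (int86, MK_FP, and far pointers are Turbo C-isms, not standard C):

    #include <dos.h>

    int main(void) {
        union REGS r;
        unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);

        r.x.ax = 0x0013;           /* BIOS int 10h: set 320x200x256 mode */
        int86(0x10, &r, &r);

        vga[100 * 320 + 160] = 4;  /* plot one red pixel at (160,100) */

        return 0;
    }

One BIOS call and one byte poked into the framebuffer, and something shows up on screen; that's the gap being described.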
Hah, I spent countless hours poking bits around in the Cirrus Logic card that I had.
I found an undocumented (to me, at least) mode that showed 16-bit images by alternating two 8-bit images. I put it in a demo to show at a demoparty, but the projector system did not have a compatible VGA card :(
Well back then we didn’t have all of that memory for those fancy abstraction layers. Sticking your finger in the framebuffer probably wasn’t the worst way to do that!
My favorite program as a teen came about because I had gotten a Sound Blaster (in my 8088 IBM PC), and you could order C/MS chips (actually a pair of Philips SAA1099 chips, originally used in the Creative Music System, aka Game Blaster).[0] Since it came with documentation on how to directly program the hardware registers, and I enjoyed coding music output (I still do), I ended up writing a program in C that could drive all 12 voices on the chips using the same notation as the Advanced BASIC "PLAY" statement[1], but with an added "V" command to select a voice. (The BASIC "PLAY" command only supported 1 voice. Maybe 3 on the PCjr?)

It would keep track of how much time had passed in each voice while compiling the information, so that you could line everything up, and then use that information to play everything at the right moment during playback. I remember I had a big case or if/else section depending on the next character, so I must have worked out some sort of state machine without knowing what that was at the time. I took the conductor's score for music we were playing in band class and entered it into a file so the program could play the multi-voice version on my computer. :)
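I don't have the original, but the core of a PLAY-style parser is small. Here's a hedged C sketch of the idea: walk the string, track tempo/octave/length state, and accumulate elapsed time per voice so multiple voices can be lined up. The names and timing formula are mine, and error handling is omitted:

    #include <stdio.h>
    #include <ctype.h>

    typedef struct {
        int tempo;      /* quarter notes per minute (T command) */
        int octave;     /* current octave (O command)           */
        int length;     /* default note length, e.g. 4 = 1/4    */
        double elapsed; /* total time consumed so far, seconds  */
    } Voice;

    /* Seconds for one note of 1/len at the current tempo. */
    static double note_seconds(const Voice *v, int len) {
        return (60.0 / v->tempo) * (4.0 / len);
    }

    static int read_int(const char **s) {
        int n = 0;
        while (isdigit((unsigned char)**s)) n = n * 10 + (*(*s)++ - '0');
        return n;
    }

    static void play_string(Voice *v, const char *s) {
        while (*s) {
            char c = (char)toupper((unsigned char)*s++);
            if (c >= 'A' && c <= 'G') {          /* a note */
                int len = isdigit((unsigned char)*s) ? read_int(&s)
                                                     : v->length;
                printf("note %c oct %d for %.3fs\n", c, v->octave,
                       note_seconds(v, len));
                v->elapsed += note_seconds(v, len);
            } else if (c == 'T') v->tempo  = read_int(&s);  /* tempo  */
            else   if (c == 'O') v->octave = read_int(&s);  /* octave */
            else   if (c == 'L') v->length = read_int(&s);  /* length */
            /* spaces and anything else are ignored */
        }
    }

    int main(void) {
        Voice v = { 120, 4, 4, 0.0 };
        play_string(&v, "T140 O3 L8 C D E2 F");
        printf("voice total: %.3fs\n", v.elapsed);
        return 0;
    }

Run one of these per voice and you get exactly the per-voice elapsed-time bookkeeping described above.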
And here's a fun video showing the upgrade, with a bit of music from The Secret of Monkey Island towards the end. Like the video creator said, I also like how it sounds similar to the Game Boy: https://www.youtube.com/watch?v=3ZtyP9tE3Ug
I have thought about this too, and I think there are mainly four things that are different now:
1. As a teen, I had a lot more free time and could spend hours every night on fun stuff. School was easy and I didn't spend much time on homework
2. As a teen, I also did not have a day job programming. My head is kinda tired when I come home, so unless it's a really interesting project I will probably just watch some TV.
3. As a teen, I did not know about good code practices and could just throw something together and see if it worked. For some reason I now find it hard not to spend time on a good code structure, architecture and best practices. I am too "advanced" to just push out crappy code quickly
4. As an adult, I know more about the alternatives out there, so I tend to think there's no point in creating something when alternatives already exist.
Kids these days have much less exposure to boredom. Anywhere you are, you can get lost in TikTok or other social media feeds in three seconds.
As a parent I struggle with this a lot, trying to teach some patience to my children who seem conditioned to fiddle with their phones whenever possible. And of course I'm often not much better myself.
That's very relatable. I don't really have any specific project I could mention that would make anyone's jaw drop. But if I look back at how little I knew compared to now, and the fact that I taught myself everything, I did some insane stuff for my age. I was so determined and could lose myself in a project for weeks just because I was so positively driven. I could do the same stuff now, much better too. But I wouldn't have the same drive and inspiration to come up with something and build it to perfection.
One thing that comes to mind is a bot I made that played Bejeweled Blitz. It worked by taking screenshots, calculating the optimal move, and moving the mouse to make it. I was 13 and wrote it in VB6.
In high school I wrote software in 6502 assembly and Turbo Pascal to send files from an Atari through the joystick ports into the LPT port on a PC, so I could transfer Atari games and run them in an emulator. I had no idea about protocols, so I used one communication line to signal that there was new data on the others. Also pre-internet times.
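That scheme is essentially a strobe handshake: one line flips to say "new bits are ready", and the other lines carry the bits. A hedged C sketch of the idea, with a simulated wire variable standing in for the real joystick/LPT port I/O:

    #include <stdio.h>
    #include <stdint.h>

    static uint8_t wire;          /* simulated port: 5 lines on one byte */

    #define DATA_MASK  0x0F       /* four data lines */
    #define STROBE_BIT 0x10       /* one strobe line */

    /* Sender: put a nibble on the data lines, then flip the strobe.
       On real hardware this was a port write. */
    static void send_nibble(uint8_t nibble) {
        uint8_t strobe = (uint8_t)((wire & STROBE_BIT) ^ STROBE_BIT);
        wire = (uint8_t)((nibble & DATA_MASK) | strobe);
    }

    /* Receiver: wait until the strobe level changes, then latch the
       data lines. On real hardware this polled a port. */
    static uint8_t recv_nibble(uint8_t *last_strobe) {
        while ((wire & STROBE_BIT) == *last_strobe)
            ;                     /* busy-wait for the strobe to flip */
        *last_strobe = wire & STROBE_BIT;
        return wire & DATA_MASK;
    }

    int main(void) {
        uint8_t last = wire & STROBE_BIT;
        send_nibble(0xA);         /* the "Atari side" sets the lines */
        printf("got nibble 0x%X\n", recv_nibble(&last));
        return 0;
    }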
I also wrote a few games on the Atari when I was even younger, but since I got a PC in high school I haven't been able to make any decent progress on (let alone finish) a single game on the PC. Despite writing a lot of software professionally as an adult, I've never had the motivation I had as a child to go beyond shallow, small experiments in my personal coding.
My Atari 2600 had an odd behavior with the reset switch. Holding it down at the beginning of the tank/Combat game enabled your first shot to go through walls. Did you find any Atari bugs while you worked on it? Pretty amazing that you could program these without the benefit of searching the internet for answers. (btw, I replied to you elsewhere :-P)
I wrote a basic bootable 32-bit OS and an editor in x86 assembly :) all without internet and by poring through the Intel manuals and the "Undocumented..." series of books.
I noticed I missed it, and so I went back to it a few years ago. I have old systems (mostly from the 70s-90s; a few newer because of my love for Sun SPARC & pre-Intel SGI) and newer ones like RISC-V boards and low-powered small devices to tinker with. None of this has any commercial value, but it feels so much better than the LLM grifting / code slinging for profit. I wish I could find a way to combine them, but I feel it's not possible.
One of my "some day" / "thought experiment" projects is to create a small RISC-V RV32EC processor that would only be able to access 64k of its memory space, but would fit on a 40-pin IC: 16 pins of address, 8 pins of data, some interrupt and timing pins, etc. Basically like a 6502 or 8080/Z80 with the RV ISA. I can already think of some "fun" issues like requiring four read/write cycles per load/store operation.
It is meant as a helper processor in larger FPGA projects, which need less than 64KB of memory. drv16 is about the same size as the tiniest RISC-V cores (SERV, Glacial) without the huge performance penalties those have.
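To illustrate the load/store issue: with only 8 data pins, every 32-bit access has to be sequenced as four external bus cycles. A toy C sketch (the bus helper and fake 64K memory are stand-ins for the imagined 40-pin part's external bus):

    #include <stdio.h>
    #include <stdint.h>

    static uint8_t mem[1u << 16];            /* fake 64K address space */

    /* One external bus cycle: 16-bit address in, 8-bit data out. */
    static uint8_t bus_read8(uint16_t addr) {
        return mem[addr];
    }

    /* A little-endian 32-bit load sequenced as four byte-wide cycles --
       the "four read/write cycles per load/store" in action. */
    static uint32_t load32(uint16_t addr) {
        return (uint32_t)bus_read8(addr)
             | (uint32_t)bus_read8((uint16_t)(addr + 1)) << 8
             | (uint32_t)bus_read8((uint16_t)(addr + 2)) << 16
             | (uint32_t)bus_read8((uint16_t)(addr + 3)) << 24;
    }

    int main(void) {
        mem[0x100] = 0x78; mem[0x101] = 0x56;
        mem[0x102] = 0x34; mem[0x103] = 0x12;
        printf("0x%08X\n", (unsigned)load32(0x100));  /* 0x12345678 */
        return 0;
    }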
Around the 1996/97 timeframe, you could fit a kernel and userspace on a floppy. I remember building a 1.44MB setup that booted a compressed kernel and had enough userspace tooling to bring up telnet, ftp, and the radio stack to drive the long-haul radio cards (we were replacing JNOS [0], IIRC, at an ISP). There was even writable space at the end of the floppy (the kernel etc. were read-only) for config overrides; a poor man's overlay, I think, but it was rather a long time ago.
Given we were working with 286-era hardware (maybe 386?), I'd be surprised if ELKS doesn't fit on a 1.44MB floppy. Indeed, simply looking at the downloads page linked from the original link would have answered your question. [1]
We don't support a compressed kernel anymore, but we have a way to compress user executables which saves about 30%. Sadly, even with straightforward decompression, that process on ancient 8088s sometimes takes longer than reading an uncompressed executable. But we're finally at the point of having "too much stuff" to fit on a floppy. And of course everyone wants games. (Don't ask about Doom: yes we have it, no it doesn't fit on a floppy :)
Check NeHaBodi. Nethack 3.4.3 or Slashem 0.7f might compile under ELKS; Slashem itself shouldn't need ncurses, but it would need some colour support to differentiate some monsters (not a requirement, but it helps a lot with the gameplay).
I'm 99% sure the original kernel, and thus the boot & root disk Linux (the original "distribution", I guess), only ran on the 386 and up, as it required and used the 386's memory-management capabilities.
There were some forks that could run without the MMU (µClinux, I think?) but as I recall they came quite a bit later.
We dropped all support for 286/386+ protected mode, paging, etc., and produce only 8088/8086 instructions, so it'll run on any x86 (including the PCjr with its peculiarities, remember that?) in real mode only, without an MMU. Of course, that means any program can write anywhere, so more care is taken towards program correctness, which is kind of fun.
Yes, it runs on that Amstrad. A funny story about the Amstrad: at one point we added a divide-by-zero trap handler in the kernel for userspace apps. When the Amstrad reboots via our 'shutdown', it gets a div-zero exception in its own BIOS (which at the time prevented the reboot, lol).
Open up an issue on GitHub and we can help you better. The usual reason for problems like this is the layout of the image is incorrect on GoTek, and/or the GoTek is set to the wrong CHS (cylinders/head/sector) for the image being used.
Yes, one can actually boot and run a minimal system on a 360k floppy, but if you want networking you need 720k. The native compiler might fit on 1440k, but needs a lot more for any development if the C library header files etc. are wanted.
Interesting. These are exactly the two ways HAL 9000's behavior was interpreted in 2001: A Space Odyssey.
Many people simply believed that HAL had its own agenda, and that's why it started to act "crazy" and refuse to cooperate.
However, sources usually point out that this was simply the result of HAL being given two conflicting agendas to abide by. One was the official one, essentially HAL's internal prompt: accurately process and report information without distortion (and therefore without lying), and support the crew. The second set of instructions, however, the mission prompt if you will, conflicted with it: the real goal of the mission (studying the monolith) was to be kept secret even from the crew.
That's how HAL concluded that the only way to proceed with the mission without lying to the crew was to have no crew.
Clarke directly says it briefly in the novel version of 2001 and expanded on it in 2010, excerpted below:
"... As HAL was capable of operating the ship without human assistance, it was also decided that he should be programmed to carry out the mission autonomously in the event of the crew's being incapacitated or killed. He was therefore given full knowledge of its objectives, but was not permitted to reveal them to Bowman or Poole.
This situation conflicted with the purpose for which HAL had been designed - the accurate processing of information without distortion or concealment. As a result, HAL developed what would be called, in human terms, a psychosis - specifically, schizophrenia. Dr C. informs me that, in technical terminology, HAL became trapped in a Hofstadter-Moebius loop, a situation apparently not uncommon among advanced computers with autonomous goal-seeking programs. He suggests that for further information you contact Professor Hofstadter himself.
To put it crudely (if I understand Dr C.) HAL was faced with an intolerable dilemma, and so developed paranoiac symptoms that were directed against those monitoring his performance back on Earth. He accordingly attempted to break the radio link with Mission Control, first by reporting a (non-existent) fault in the AE 35 antenna unit.
This involved him not only in a direct lie - which must have aggravated his psychosis still further - but also in a confrontation with the crew. Presumably (we can only guess at this, of course) he decided that the only way out of the situation was to eliminate his human colleagues - which he very nearly succeeded in doing. ..."
Ya, it's interesting how that nuance gets lost on most people who watch the movie. Or maybe the wrong interpretation has just been encoded as "common knowledge", since it's easier to understand a computer going haywire and becoming "evil".