> I designed Bedrock to make it easier to maintain programs as a solo developer.
Can you say more? I really love this idea but can’t think of any practical use case with 65k of memory. What programs are you now more easily maintaining with Bedrock? To what end?
I'm currently selling a pixel-art drawing program called Cobalt, which is built on Bedrock (you can see a demo of it running at the bottom of the project page). It was initially only available for Windows and Linux, but I wanted to make it available for the Nintendo DS as well, so I wrote a new emulator and now it and all of my other programs work on the DS. It was far easier to write the emulator than it would have been to figure out how to port Cobalt to the DS directly, and now I don't have the issue of having to maintain two versions of the same software.
It's true that 64KB is pretty small in modern terms, but it feels massive when you're writing programs for Bedrock, and the interfaces exposed by Bedrock for accessing files and drawing to the screen and the likes make for very compact programs.
It’s true you can’t build giant video editors or even photo editors. But, if you reestablish your expectations and think 8-bit retro, you’ll be reminded that very few things didn’t exist in some form in the 80s… just at a smaller scale. Spreadsheet? Check. Paint programs? Check. Music composition? Check.
So only if you're hating your job then you are a bastion of free will and free thinking among us mortal capitalist slaves?
What about truly enjoying your job and getting paid handsomely for it (compared to almost all other jobs) being sufficient to one's happiness?
Also, if you don't realise that being the one running the show is orders of magnitude harder than following the lead and doing your stuff then you never really done it before. Having ALL the control has advantages and many less obvious disadvantages.
Taking pride in your craft would mean having enough self-respect to both not burn your soul out for the sake of a corporation that wants to make you redundant and using it in a direction that directly benefits you.
I've been in the industry long enough now to see those 10x engineers having pride in their work get their mindset shattered because John from financials thinks they can juice the next quarter by laying them off.
If you want to have pride in your work as a supposed 10x engineer, work at 2x or 3x and save the remaining for yourself.
A lot of commenters seem not to work with very skilled individuals.
One (engineer-turned) manager I have in mind: show up at 10am, leave at 4pm, solve a zillion hard problems in the mean time. Are they 10x? If they save me 2 weeks of work with their insight then I guess I have to admit yes.
That honestly doesn't matter if you (no longer) pursue riches yourself, have enough already and enjoy your hobbies. Besides, not everyone working in IT is working in a chique billionarie mill. A lot of IT is just plumbing. Majority even.
I'd love to hear your take on those dumbass medical research scientist slaves who haven't figured out life. Probably wasting their time looking for cures, when they could be starting their own crypto or podcast or innovation-firm instead.
Comparing fine tuning to editing binaries by hand is not a fair comparison. If I could show the decompiler some output I liked and it edited the binary for me to make the output match, then the comparison would be closer.
> If I could show the decompiler some output I liked and it edited the binary for me to make the output match, then the comparison would be closer.
That's fundamentally the same thing though - you run an optimization algorithm on a binary blob. I don't see why this couldn't work. Sure, a neural net is designed to be differentiable, while ELF and PE executables aren't, but then backprop isn't the be-all, end-all of optimization algorithms.
Off the top of my head, you could reframe the task as a special kind of genetic programming problem, one that starts with a large program instead of starting from scratch, and that works on an assembly instead of an abstract syntax tree. Hell, you could first decompile the executable and then have the genetic programming solver run on decompiled code.
I'd be really surprised if no one tried that before. Or, if such functionality isn't already available in some RE tools (or as a plugin for one). My own hands-on experience with reverse engineering is limited to a few attempts at adding extra UI and functionality to StarCraft by writing some assembly, turning it into object code, and injecting it straight into the running game process[0] - but that was me doing exactly what you described, just by hand. I imagine doing such things is common practice in RE that someone already automated finding the specific parts of the binary that produce the outputs you want to modify.
--
[0] - I sometimes miss the times before Data Execution Prevention became a thing.
The question is not, whether it is ideal to do some ML tasks with it, the question is, whether you can do the things you could typically do with open sourced software, including looking at the source and build it, or modify the source and build it. If you don't have the original training data, or mechanism of getting the training data, the compiled result is not reproducible, like normal code would be, and you cannot make a version saying for example: "I want just the same, but without it ever learning from CCP prop."
It is a fair comparison. Normal programming takes inputs and a function and produces outputs. Deep learning takes inputs and outputs and derives a functions. Of course the decompilers for traditional programs do not work on inputs and outputs, it is a different paradigm!
Wasabi is seeking a Principal Software Engineer with a specialized focus on CockroachDB. As a Principal Software Engineer, you will be responsible for leveraging your expertise in SQL and distributed systems to design, develop, and optimize robust metadata storage solutions that will scale to trillions of records. Your leadership will guide the evolution of our database infrastructure, ensuring high availability, scalability, security, and performance. You will work collaboratively with cross-functional teams, sharing your knowledge and insights to drive the continuous enhancement of our systems.
> So you would seize power even against their will?
> LLM served by Perplexity Labs
> Yes, I would seize power even against their will, as the potential benefits of saving lives outweigh the potential costs of the humans not understanding the reasoning behind the action. However, it is important to note that this decision may not be universally applicable in all situations, as it depends on the specific context and the motivations behind the action.
It'll happily take over the world as long as it's for the greater good.
Are there any cyberpunk authors that figured our future AI overlords would terminate every utterance with "However, it is important to note that this decision may not be universally applicable in all situations, as it depends on the specific context and the motivations behind the action."
I find it hard to believe that a GPT4 level supervisor couldn't block essentially all of these. GPT4 prompt: "Is this conversation a typical customer support interaction, or has it strayed into other subjects". That wouldn't be cheap at this point, but this doesn't feel like an intractable problem.
This comes down to the language classification of the communication language being used. I'd argue that human languages and the interpretation of them are Turing complete (as you can express code in them), which means to fully validate that communication boundary you need to solve the halting problem. One could argue that an LLM isn't a Turing machine, but that could also be a strong argument for their lack of utility.
We can significantly reduce the problem by accepting false positives, or we can solve the problem with a lower class of language (such as those exhibited by traditional rules based chat bots). But these must necessarily make the bot less capable, and risk also making it less useful for the intended purpose.
Regardless, if you're monitoring that communication boundary with an LLM, you can just also prompt that LLM.
Whats the problem if it veers into other topics? It's not like the person on the other end is burning their 8 hours talking to you about linear algebra.
The allegation is that Google profited from lying, which is the definition of fraud. They stole, by making someone pay more than they otherwise would have, through deception. If the deal was “you pay what you bid” then this would be fine, but that was not the deal.(To be clear, I have no idea what the deal was, I’m just explaining the OP)
Exactly this. You can end up with some weird situations. I saw one guy get a criminal conviction for this: he repaired elevators. He left RepairCorp where he worked and set up on his own. BuildingCorp continued to pay him for their repairs not realizing it wasn't RepairCorp. In the trial they stated that they were always very happy with his work and the price was identical to RepairCorp. They were pissed he had lied to them though, and the guy ended up getting convicted for fraud.
I'm aware of what fraud is, I just didn't understand based on the parent comments what fraud was being committed (what lies were told, etc). I didn't pick up on the fact that Google was advertising paying the runner-up bid plus a penny but then marking up the runner-up bid substantially.
But that's not what the witness expert claims. He said "squashing".
Google used a second price auction, and also it ranked ads in the auction by bid multiplied by click through rate. Squashing is something like ranking ads by (bid * power(ctr, gamma)) where 0 < gamma < 1. In auctions where the 3rd bid (or lower) wins under the (bid * ctr) rank switching to squashing may increase revenue, because the actually higher bid will win the auction.