Hi, quick feedback: the demo is extremely short, so I can't really say much. Please generate more complicated scenes and, most importantly, inspect the wireframe. From what I could glean from the demo, the generated models are tri-based instead of quad-based, which would be a showstopper for me.
Because traditionally, Blender modeling works best on a clean quad-based mesh. Just look at any modeling tutorial for Blender: one of the first things you learn is to always keep a clean, quad-based topology and to avoid triangles and n-gons as much as possible, as they will make further work on the model more painful, if not impossible. That ranges from simple stuff like doing a loop cut to things like UV-unwrapping and using the sculpting tools. It's also better for subdivision surface modeling. You can of course use tri-based models, but if you want to refine them manually, it's often a pain. For me it's usually pretty much a "take it as-is or leave it" situation with tri-based meshes, and since I see these AI-created models more as a starting point than as the finished product, having a clean quad-based topology would be very important for me.
Yes, because UV-unwrapping is much more predictable with quads, and you can place seams along edge loops. I'm by no means an expert here, and maybe there are tools which make this similarly easy with non-quad topology, but at least from what I've learnt, the clean grids you get from quad meshes are simply much easier to deal with when texturing.
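For what it's worth, Blender ships a "Tris to Quads" operator that can recover some quads from a triangulated mesh, though it only merges triangle pairs into reasonably flat, square-ish faces, so it's no substitute for proper retopology. A minimal bpy sketch, assuming the generated model is the active object:

```python
import bpy
import math

# Assumes the imported, triangulated model is the active mesh object.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Merge neighbouring triangles into quads where the result stays
# reasonably planar and rectangular; loosen the thresholds to merge
# more aggressively (at the cost of uglier quads).
bpy.ops.mesh.tris_convert_to_quads(
    face_threshold=math.radians(40.0),
    shape_threshold=math.radians(40.0),
)
bpy.ops.object.mode_set(mode='OBJECT')
```

For anything you actually plan to sculpt or subdivide, you'll typically still want a manual retopo pass (or something like the QuadriFlow remesher) on top of that.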
> Yes, that's a French thing, always has been - sorry if that's the first time the author went to France and rented a private flat there ...
Not just in France; in Germany it's also not uncommon when renting a "Ferienwohnung" (holiday flat). You usually also pay extra for cleaning if you stay for more than a few days. That's just how it works here, but of course it needs to be mentioned in the listing.
> Plus, why would you even need a drier mid-June in Florence (when the temperature hovers between 30C and 40C during day time...) - must be an American thing somehow.
That's indeed hilarious, but oh, the horror of putting wet clothes on a drying rack. Dryers are simply not that common over here. Electricity is expensive, it's not good for your clothes, plus dryers are the number one electrical appliance for causing house fires, so no thank you.
Is Steam even dependent on 32-bit distribution packages anymore? Doesn't it ship its own whole set of 32-bit libraries? If I look at my running Steam processes right now, it's all a huge pile of wrappers, alongside the steamwebhelper binary, which however is 64-bit. So I'm not at all sure that removing 32-bit distribution packages would even be a problem nowadays. Even back then, you had to convince Steam to use system libraries with things like STEAM_RUNTIME_PREFER_HOST_LIBRARIES or similar.
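(A quick way to check this on your own box, as a sketch assuming Linux with pgrep and file available:)

```
for p in $(pgrep steam); do
  # /proc/$p/exe points at the running binary; file -L reports
  # whether it is a 32-bit or 64-bit ELF
  file -L "/proc/$p/exe"
done
```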
Steam bundles a lot of 32-bit libraries, but still requires at least 32-bit glibc, Mesa, and libGL, among others, to be provided by the system. AFAIK this is for ABI reasons: they wouldn't be able to ship these with Steam without also running 32-bit software in some sort of containerization runtime. I believe the compromise when Ubuntu dropped 32-bit support was that Canonical would still provide the 32-bit versions of specifically the packages required for Steam to work. I hope that Valve will work out a similar compromise with Red Hat. The problem Valve faces is that there are thousands of 32-bit Linux games on Steam, so dropping 32-bit support means users would see a lot of games they used to play stop working.
I will not restrict myself to an arcane subset of Make just because you refuse to type 'gmake' instead of 'make'. Parallel execution, pattern rules, order-only prerequisites, includes, not to mention the dozens of useful functions like (not)dir, (pat)subst, info... There's a reason why most POSIX Makefiles nowadays are generated. It's not GNU's fault that POSIX is stale.
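To make that concrete, a toy sketch (file names made up) of the kind of thing that's trivial in GNU Make and painful or impossible in strict POSIX make:

```make
# GNU-isms throughout: wildcard/patsubst/notdir/info functions,
# pattern rules, order-only prerequisites, and ifeq conditionals.
# (Recipe lines must start with a tab, as always.)
SRCS := $(wildcard src/*.c)
OBJS := $(patsubst src/%.c,build/%.o,$(SRCS))

ifeq ($(DEBUG),1)
CFLAGS += -g -O0
endif

$(info building $(notdir $(OBJS)))

app: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $^

# '| build' is an order-only prerequisite: the directory must exist,
# but its timestamp never triggers a rebuild.
build/%.o: src/%.c | build
	$(CC) $(CFLAGS) -c -o $@ $<

build:
	mkdir -p $@
```

And `make -j$(nproc)` gives you the parallel build on top, for free.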
EDIT: There's one exception, and that would be using Guile as an extension language, as that is often not available. However, thanks to conditionals (also not in POSIX, of course), it can be used optionally. I once sped up a Windows build by an order of magnitude by implementing certain things in Guile instead of calling shell (which is notoriously slow on Windows).
Agreed. My company decided on using GNU Make on every platform we supported, which back then (last century) was a bunch of Unix variants plus Linux. That made it possible to write a simple and portable build system which could be used for everything we did, no hassle. And it wasn't difficult, because gmake was available basically everywhere, then as now.
Completely agree. POSIX is irrelevant anyway. Every single Unix-like has unique features that are vastly superior to whatever legacy happens to be standardized by POSIX. Avoiding their use leads to nothing but misery.
This could mean many different things, from communicating well and documenting things, to what you are probably assuming: answering your messages instantly, 24 hours a day.
I doubt you can get a feeling for the work/life balance from half a sentence.
However, if your relative is employed and needs to type for the job, there's a good chance the employer will pay for it if it means they can work more efficiently during these months. Another option, which is much less likely to succeed and will probably take much longer, is trying to get it covered by health insurance.
There's not one word in his post where he looks down on VSTs or anything. It's just how he likes to make music, and he is unhappy with the state of modern MIDI implementations. In fact, it's the exact opposite: you are shaming him for still using MIDI.
Like C++, this cannot be parsed with a context-free grammar. The "present future" refers to
> No crashes what so ever and also no distractions by updates or "phone home applications"
which is something I would guess most people would indeed see as shameful regarding our present future in software, but OTOH, this is HN, so who knows.
My old synths crashed and required physical maintenance. In addition, I lost songs that failed to read off of the crude tape backup that my quantizer/sequencer used.
I am much happier with my current setup, which I could never have afforded outside the current "future", than with my much lesser previous setup. This being HN, I'm sure there are people who can afford to spend much more than me on gear, so they might prioritize the "minor differences" you list over no access at all, but I much prefer having access, at price points I can actually afford.
Good for you! I also sold all my external gear many years ago and never looked back. And this is all very interesting, but has nothing to do with the topic at hand: where is he shaming people for using VSTs instead of external gear, or saying people are cheapskates for using VSTs, or that his music is better? Because he still doesn't. He says USB MIDI has tons of jitter and modern music software phones home and crashes a lot, and Cubase on his Atari does neither, which from my memory is totally true.
I have had zero issues with jitter on my Novation SL MkII, so I consider it a non-issue, and I think the tons of hits made using USB MIDI controllers speak to that. Or the tons of artists that perform using Ableton. Or the tons of live artists that perform using Gig Performer. Going between a piano, piano action, synth action, and un-weighted keyboards has much more impact on playing. Do we shame the old past for having different pricing options / cheaping out on keybeds?
There is so much music software that doesn't phone home that it's a ridiculous focus. iLok is the biggest phone-home offender, and you can buy a USB dongle to stop that, so? I don't get the point.
" No crashes what so ever and also no distractions by updates or "phone home applications". It just works, distractless! Shame on the "present future".
Still doesn't make sense to me and is needlessly shaming/negative. My setup just works and is way less distracting than my past hardware setup, especially if you are talking about recalling projects. I lost WAY more projects/work previously than I do now, and when things failed in the past you were stuck until you could perform a hardware fix (if you could afford the cost of the fix). I'll take software.
How do you think any professional works nowadays with MIDI? A good, modern USB interface (from Focusrite or similar) has a jitter well below 1ms, usually in the range of 200µs. If that is too much, simply sync your DAW with an external, dedicated clock, which will usually give you a jitter in the single µs range.
I have a Focusrite and the MIDI timing is terrible. Sure, there is more to it than just the interface. With USB you just cannot guarantee stable MIDI timing, because there is no good MIDI buffer implementation for it. Technically it would be possible, but no one cares. Professionals use something like MIDI-to-audio converters: a VSTi plugin takes the MIDI signals and modulates them onto an audio signal (which can be easily buffered), and some dedicated outboard equipment converts this back to MIDI. If you are working with hardware synths etc., this is the only option you have nowadays with non-vintage hardware. A lot of producers do not work with MIDI anyway, they use plugins; that's why it is something of a niche problem and there's not much talk about it.
First off, I'm assuming of course we are talking Mac here, because Windows is unusable for MIDI. If you have terrible MIDI timing with a Mac, then yes indeed, you'll need to sync via audio, but there are nice and inexpensive solutions for this, for instance the Midronome.
Look, I'm not trying to convince you to get rid of your Ataris, quite the contrary. I'm just disagreeing that it's impossible to have low jitter nowadays, but I fully agree that things used to be simpler before everything was done via USB.
Agreed. It is of course not impossible, but it is almost impossible out-of-the-box (literally ;-)). I have a USAMO (Universal Sample-Accurate MIDI Output) device, but do not use it, because as I said, the Atari is king here. :-) Not sure how the Midronome can solve the problem of MIDI notes coming out of a modern DAW inaccurately, but maybe I don't understand it completely. Need to have a deeper look. For some years now I have been using Linux with a Focusrite for mastering and audio tracking. MIDI has been bad on Linux and Windows ever since I got my first USB interface and moved away from PCI interfaces. But this shouldn't matter too much. :-)
Note that this is an old version; I just saw that there's now the "Nome II", and at least for Mac, he has actually developed a USB protocol to provide a stable clock (which, as you've already written, is totally possible via USB, it's just that nobody cared enough):
Thanks a lot!
The Scotsman is cool, and his t-shirt too. :-D The t-shirt says "little pig" in German.
Regarding "midi notes" Sim'n Tonic himself is saying this to the Midronome:
"Note that only these MIDI messages are simply forwarded when they are received, their timing is not changed. So if your DAW sends them with a lot of latency and/or jitter, the Midronome will forward them with the same latency/jitter. Actually this is a problem I plan on tackling as well [...]"
So the Midronome does not solve the problem of inaccurate MIDI notes coming from a modern DAW. The USAMO does, by the way, but only with one MIDI channel at a time. And of course, coming back to the actual topic, the Atari has no problem at all with accurate MIDI notes; it is absolutely tight on all 16 channels. So it seems there is indeed nothing comparable to the Atari nowadays. Maybe there will be in the future.
Not sure if that is still accurate. This might only be available for Mac, but on the FAQ for Nome II it says this:
> Can Nome II send MIDI Notes?
> Nome II is like a MIDI hub, you can ask it to forward any MIDI sent over USB to one of its MIDI outputs. It will not only forward these instantly but merge them smartly with the MIDI Clock, without affecting it.
The Windows MIDI/USB stack adds a considerable amount of jitter to the MIDI clock, compared to the much superior one in macOS. I will fully admit that "unusable" is a personal opinion based on my experience. Of course performers also use Windows, but I heavily doubt you are able to see which device in their rack acts as the master clock and how they sync their devices, quite apart from the fact that most performers nowadays don't use MIDI at all.
MIDI is used heavily for guitar patch and lighting automation, as well as for triggering backing tracks in a DAW running on stage. The use of MIDI (over USB) has only increased on stages.
This is getting ridiculous. We are talking about making music, i.e. triggering notes from different devices in sync. You know, what MIDI was originally designed for, not triggering some lights, guitar patches, or a backing track. You are exactly proving my point: MIDI nowadays is pretty much reduced to SysEx for doing simple automations. None of that is seriously affected by jitter in the ms range. You sound like you have no idea how electronic music was done before VSTs were a thing.
No, they don't. The formulation in TFA is a bit too generic: Debian will usually not remove any code that "calls home". There are perfectly valid reasons for software to "phone home", and yes, that includes telemetry. In fact, Debian has its own "telemetry" system, the popularity-contest (popcon) package.
Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data, and both apply to Go's telemetry, so there's no need for a fork.
> Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data
Telemetry contains personal data by definition; it just varies in how sensitive it is and how it's used. Also, it's been shown repeatedly that "anonymized" is shaky ground.
In that popcon example, I'd expect some Debian-run server to collect a minimum of data and aggregate it, and Debian maintainers to use it to decide where to focus effort with respect to integrating packages, keeping up with security updates, etc. Usually ok.
For commercial software, I'd expect telemetry to slurp whatever is legally allowed / stays under users' radar (take your pick ;), the vendor keeping datapoints tied to unique IDs and selling data on "groups of interest" to the highest bidder. Not ok.
Personal preference: e.g. for a crash report, "report" or "skip" (default = skip), with a checkbox for "don't ask again". That way it's no effort to provide the vendor with helpful info, and just as easy to have it get out of users' way.
It's annoying the degree to which vendors keep ignoring the above (even for paying customers), given how simple it is.
Why would it have to include PII by definition? I'd say DNF Counting (https://github.com/fedora-infra/mirrors-countme) should be considered "telemetry", yet it doesn't seem to collect any personal data, at least by what I understand telemetry and personal data to mean.
I'm guessing that you'd either have to be able to argue that DNF Counting isn't telemetry, or that it contains PII, but I don't see how you could do either.
Yes, so the vendor must not store it. Something along those lines is usually said in the privacy policy. If you don't trust the vendor to do that, then do not opt-in to sending data, or even better, do not use the vendor's software at all.
Sometimes we have to, or we simply want to, run software from developers we don't know or don't entirely trust. This just means that the software developer needs to be treated as an attacker in your threat model, and you mitigate accordingly.
I would argue that users can't inherently trust the average developer anymore. Ideas about telemetry, phoning home, conducting A/B tests and other experiments on users, and fundamentally, making the software do what the developer wants instead of what the user wants, have been thoroughly baked in to many, many developers over the last 20 or so years. This is why actually taking privacy seriously has become a selling point: It stands out because most developers don't.
I can't argue that you are wrong, but I can argue that, for myself, if I don't trust a developer not to screw me over with telemetry, I cannot trust the developer not to screw me over with their code. I can't think of a scenario where this trust isn't binary: either I can trust them (with telemetry AND code execution), or I can't trust them with either.
Could you describe what scenario I am missing?
You’re not missing anything. In general, I don’t think you can really trust the vast majority of software developers anymore. Incentives are so ridiculously aligned against the user.
If you take the next step: “do not use software from vendors you don’t trust,” you are severely limiting the amount of software you can use. Each user gets to decide for himself whether this is a feasible trade off.
Yeah, isn't that a shame? Wouldn't it be nice if, instead of catastrophizing that telemetry data is always only ever there to spy on us, we assumed that there are actually trustworthy projects out there? Especially for FOSS projects, which usually cannot afford extensive in-house user testing, telemetry provides extremely valuable data to see how their software is used and where it can be improved, especially in the UX department, where much FOSS is severely lacking. This thread is a perfect example of the kind of black/white thinking that telemetry must be ripped out of software no matter what, usually based on some fundamental viewpoint that anonymity is impossible anyway, so why bother even trying. This is not helping. I usually turn on telemetry for FOSS that offers it, because I hope they will use it to actually improve things.
Many corporate privacy policies, per their customer contracts, agree with this. Even a single packet, regardless of contents, sends the IP address, and that is considered by many companies to be PII. Not my opinion, it's in thousands of contracts. Many companies want to know every third party involved in tracking their employees. Deviating from this is a compliance violation and can lead to an audit failure and monetary credits. These policies are strictly followed on servers and less so on workstations, but I suspect with time that will change.
I can only repeat myself from above: it's about what data you store and analyze. By your definition, all internet traffic would fall under PII regulations because it contains IP addresses, which would be ludicrous, because at least in the EU there are very strict regulations on how this data must be handled.
If you have an nginx log and store IP addresses, then yes: that contains PII. So the solution is: don't store the IP addresses, and the problem is solved. The same goes for telemetry data: write a privacy policy saying you won't store any metadata regarding the transmission, and say what data you will transmit (even better: show exactly what you will transmit). Telemetry can be done in a secure, anonymous way. I wonder how people who dispute this get any work done at all. By your definitions regarding PII, I don't see how you could transmit any data at all.
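(To make the nginx part concrete, a minimal sketch; the format name is made up, and it goes in the http block:)

```nginx
# Access log without $remote_addr, so no client IPs are ever written to disk.
log_format noip '[$time_local] "$request" $status $body_bytes_sent';
access_log /var/log/nginx/access.log noip;
```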
> By your definitions regarding PII, I don't see how you could transmit any data at all.
On the server side you would not. Your application would just do the work it was intended to do and would not dial out for anything. All resources would be hosted within the data center.
On the workstation it is up to the corporate policy, and if there is a known data leak it would be blocked by the VPN/firewalls, and on corporate-managed workstations by IT setting application policies. Provided that telemetry is not coded as a blocking dependency, this should not be a problem.
Oh, and this is not my definition. This is the definition within literally thousands of B2B contracts in the financial sector. Things are still loosely enforced on workstations, meaning that it is up to IT departments to lock things down. Some companies take this very seriously and some do not care.
> Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data, and both apply to Go's telemetry, so there's no need for a fork.
This changed somewhat recently. Telemetry is enabled by default (I think as of Golang 1.23?)
Attempts to contact external telemetry servers under the default configuration are the issue. That not all of the (needlessly) locally aggregated data would actually be transmitted is a separate matter.
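(If I remember the tooling correctly, you can inspect and change this yourself; treat the exact commands as an assumption about a recent Go toolchain:)

```
go telemetry        # prints the current mode: local (the default), on, or off
go telemetry off    # disable even local collection
go telemetry on     # opt in to uploading aggregated reports
```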